With
consumers making their own prime-time schedule on multiple devices each
night of the week, reaching an audience with video content has never
been trickier for advertisers. Even with a massive TV ad budget,
on-demand services and cord-cutting bundles offer whole segments of the
population a way around old-fashioned ads.
Facing these challenges, media buyers have to see the big picture
when deciding when and where to run digital video, and the landscape is
changing by the day. The best approach must take into account each
variable in content production, broadcasting and promotion. The 2016 NAB
Show features several educational opportunities on these topics,
including a Super Session, “Where’s All the Ad Revenue? Media Strategies Unleashed in a Dynamic Multiplatform Environment,”
with special presenters Jason Schragger, chief creative officer of
Saatchi & Saatchi LA, and Chris Macdonald, president of McCann New
York. And all NAB Show attendees can visit The Advanced Advertising Theater, a must-see attraction on the Show Floor that will host a series of Ad Tech-focused presentations.
Where the Eyeballs Are
Finding the eyeballs a client needs has never been a tougher
challenge, so it all begins where digital content is actually being
consumed. According to eMarketer research, U.S. consumers viewed half their digital video content
on mobile devices as of 2015. Meanwhile, only one-third of the ad spend
on this content was devoted to mobile. The research says it will take
until 2019 for desktop and mobile to get equal dollars for video.
If you have an idea where your target consumes content, you start
with an advantage. New channels are launching by the day, as are
streaming services, apps and cord-cutting packages. Staying on top of
the audiences viewing content in these mediums gives agencies an upper
hand when looking to place a spot on behalf of a client.
According to Michael Klein of Condé Nast Entertainment, getting the video to the biggest audience involves creating content channels
as well as fully monetizing video through distribution via Apple TV,
Roku, Xbox and Web services such as AOL or Twitter. Millennials clearly
prefer on-demand services, and YouTube continues to be a driving force,
yet new channels are emerging from companies both massive and small.
Delivering Quality Video Content
Even with the ideal landing spot in mind, agencies have to depend on a
production solution that works for the mediums in question, and that
may come from an in-house content team. For example, The New York Times
has taken to producing videos for advertisers to roll on its website and
app.
Looking at the social media landscape millennials frequent, agencies
can take stock of the leading producers in each field. In the case of
Vine, content producers with a large following are open to being hired
by major brands, as long as the terms are right. Bear in mind that
standard rules apply to these partnerships. As Vine star Ry Doon has
noted, a six-second video could take eight hours to produce.
If there were one factor to rank above the rest, level of engagement
would be it. Seeing consumers respond passionately to video content is
the goal of any spot, as they immediately look to consume more on the
same channel. The brand associated with this quality product becomes an
instant winner.
If you’re an advertising agency grappling with the overwhelming landscape of digital video, NAB Show, April 16-21 in Las Vegas, has everything you need to know from programmatic to social media to media planning. Use code BA52 in registration before April 1 for a free Exhibits Pass, which gives you access to exhibits, including The Advanced Advertising Theater, and general and info sessions. For access to your choice of three sessions from the Conference Program, you can add a Session 3-Pack for just $150.
On
a crisp morning last autumn, an old media behemoth provided an enormous
boost for a next-generation medium – virtual reality. On November 5,
2015, The New York Times delivered 1.2 million Google Cardboard VR
viewers to subscriber doorsteps, breaking a huge barrier. All users had
to do was insert their smartphone, and they were able to participate in
this new experience.
When VR came roaring back into public consciousness several years ago
with early demos of headsets such as the Oculus Rift, it was perceived
as primarily a gadget for gamers. Now Google, The New York Times and
others have helped to change that, and consumers are gravitating towards
the more accessible options. If a VR experience can be constructed from
a smartphone and a cardboard box, almost anyone can make one.
Tech leaders continue to make VR more accessible to a wider audience.
In fact, Samsung is currently offering customers a free Galaxy Gear VR
headset and game bundle with the purchase of a new Galaxy S7 or S7 Edge
smartphone.
The next move is up to the content producers. Since almost anyone can
get a viewer or dock, and many people already have a smartphone, it’s
up to VR creators to provide the “why” for this technology.
The low cost barrier means VR could revolutionize all kinds of
fields, including education, news and entertainment. Google, for
example, is already taking kids around the world on virtual field trips
using Cardboard, enabling students in San Francisco to tour a mangrove
forest or swim with great white sharks.
The 2016 NAB Show in Las Vegas in April features enough programs
focused on VR production and content creation that it is an essential
destination for anyone interested in this rapidly evolving field. Some
sessions, like “Live Streaming Virtual Reality,” “Post-Produced Virtual
Reality” and “The Current State of Virtual Reality” bring industry
insiders, innovators and academics together to discuss the latest
trends. These include market projections and recent hardware and
software developments as well as some perspective on the business
opportunities for content creators.
The NAB Show even includes a Virtual & Augmented Reality
Pavilion, where you can see how the world as we know it is going through
enormous shifts. You can get your hands on the latest augmented and
virtual reality equipment and software. You’ll also learn how the new
medium is impacting all aspects of filmmaking, storytelling, cameras,
lighting, sound, production, special effects (VFX), editing,
distribution, coding and consumption.
For the more cinematic-minded, the Kaleidoscope VR Showcase will
feature groundbreaking virtual reality films and immersive experiences
hailing from North America, Europe, South America and beyond.
Kaleidoscope is a visionary studio empowering independent VR artists
around the globe. For the first time in the show’s history, NAB Show
attendees will have the opportunity to travel to virtual worlds and
experience the most innovative narrative, environmental and interactive
content.
“With the rapid evolution of media and entertainment, it’s clear that
virtual reality will play a prominent role in the future of film and
broadcasting,” Chris Brown, NAB Executive Vice President of Conventions
and Business Operations, said in a statement. “Kaleidoscope will be a
major highlight of our virtual reality educational programming – we
can’t wait to see what this next generation of artists brings to the NAB
Show floor.”
Other programs will dive into specific use cases, including the
sessions “Being There – Virtual Reality News and Documentaries” and
“Building Worlds – Cinematic Storytelling in VR and AR.” Augmented
reality, which is more focused on adding value to the real world via
headsets than creating a completely immersed experience, will also be
discussed in-depth at the “Augmented Reality: The Merger of Content and
Interaction” session.
VR was undeniably gimmicky when it first captured the public’s
imagination in the 1980s, and it remained pretty gimmicky when it
resurfaced in the late 2000s. But with the lower barrier for entry and
almost no limit to its uses, VR is now one of the most exciting tech
breakthroughs in recent years, opening up new opportunities every day
for the latest generation of content creators.
Virtual and augmented reality may be the future, but NAB Show can prepare you for it now. Learn how you can become a producer of this amazing content by checking out the NAB Show April 16 to 21 in Las Vegas.
Drones.
Virtual reality. “Cord nevers.” The key media and entertainment
industry trends that will drive discussions and presentations at the
2016 NAB Show, April 16 – 21 in Las Vegas, showcase how much technology
is changing this decade – and how content consumers and their tastes are
changing with it. Returning trends such as the evolution of 4K are
still very much in the limelight, as well. Together, these trends make
it one of the most engaging times to be working in the industry.
Virtual Reality, Augmented Reality, Immersive Content and 360-Degree Video
The Consumer Electronics Association expects sales of VR headsets to
reach 1.2 million units this year, a 500 percent increase over 2015;
associated revenues are projected to exceed $540
million. Immersive content, though still in its early days, will play a
big role in NAB Show discussions. By 2020, the VR/AR market is projected
to reach $150 billion, according to a recent report from Manatt Digital
Media. Though VR, AR and 360 allow for more immersive experiences,
there are quite a few obstacles to overcome before these technologies become
mainstream – from headset accessibility, comfort and pricing to learning
how to maintain audience attention; tackling production challenges;
figuring out the role of advertising; and more.
The Era of Drones
The drone landscape has significantly matured since last year, with
the global commercial drone market expected to reach $2.07 billion by
2022. Last year, the FAA approved drone use on film and TV sets.
Presently, nearly 10 percent of all productions use unmanned aerial
vehicles (UAVs), including “Criminal Minds,” “The Leftovers,” “Narcos”
and “Supergirl”; Audi, Tesla, Chrysler and Nike commercials; films and
more. Content showcases dedicated to the practice are surfacing – from
the New York Film Festival and Festival CinéDrones in France to the
International Drone Film Festival in the UK. Major tech innovators are
capitalizing on the trend, with companies including Facebook and Google
racing to stake their claim in the market with smaller, affordable,
higher quality drones.
Social Media Is Redefining the Ratings Game
Consumers today spend almost as much time viewing media as they do
working a full-time job, watching an average of 38 hours of video per
week. As demand for increasingly portable entertainment rises, content
providers and advertisers face an arduous battle as they compete with
the second screen for audience time. Watching video is no longer a
linear activity but a communal experience, with social media connecting
audiences viewing live, streamed and on-demand content. It also provides
a real-time feedback opportunity for advertisers, who can measure
activity and fan engagement on social networks such as Twitter to gauge
audience reception to ads. About 86 percent of Facebook users currently
interact with a second screen while watching programs, with activity
peaking around prime time, and Twitter users are 66 percent more likely
than average to interact with the online content of a show. Last year,
Nielsen announced a plan to overhaul its audience measurement system
with new metrics spanning popular platforms and devices for advertisers,
buyers and networks before the end of the year.
The Evolving OTT Landscape: From ‘Cord Cutters’ to ‘Cord Nevers’
With more than 181 million people in the US predicted to watch video
via an app or website this year, there’s no denying that over-the-top
content (OTT) is mainstream, though its arrival has been controversial.
While cord cutters currently make up less than 10 percent of Americans,
their ranks are increasing, along with “cord nevers,” an emerging
category of viewers who have never paid for a cable subscription. A
Forrester research report released last fall forecasts that half of
adults under the age of 32 will not pay for TV by 2025. The future of
OTT will likely be largely subscription based – of the estimated $185
billion the overall TV and video market made last year, subscriptions
were responsible for $104 billion!
Ultra HD and High Dynamic Range: A Time of Transition
Last year was a time of evolutionary developments in Ultra HD (UHD)
and High Dynamic Range (HDR) – from UHD TV sales growth to newly
announced HDR TVs, expanded 4K and HDR content offerings and the launch
of dedicated 4K UHD channels. France Television, Sky Deutschland and BT
Sport successfully delivered live 4K UHD sports broadcasts this year,
and NHK will be running a series of 8K UHD broadcast tests in 2016 in
time for the Rio Summer Olympics. Currently, most 4K content is
available via streaming services such as Netflix, which revealed 14 new
4K shows and films slated for release through the end of 2016. Live
broadcast is catching up – with NASA already running test broadcasts for
a new 4K UHD TV channel.
IP: Redefining Live Production
The shift from Serial Digital Interface (SDI) – the first family of
digital video interfaces standardized by SMPTE – to IP in live
production is slowly getting underway but is anticipated to be one of
the most disruptive transformations in industry history. IP is opening
new doors for flexible, remote live HD, UHD and HDR production workflows
and next-gen over-the-air transmissions. Fox is already working on
20/20 visions – its live IP implementation, which was recently tapped
for the Women’s World Cup in Canada. Using Adobe Creative Cloud, the
studio’s teams were easily able to share and edit content between LA and
Vancouver facilities. IP adoption is still in the early stages for live
production, and several questions remain, especially regarding which
environments make the most sense for IP.
It can be a challenge to keep up with the rapid pace of change in the industry; make sure you’re in tune with the latest trends by attending NAB Show in Las Vegas.
Both
terrestrial and extraterrestrial technologies — notably wireless and
satellite — promise to contribute to the meteoric rise of the Internet
of Things (IoT).
Market research and consulting firm NSR predicts satellite’s
part of the IoT market and the related machine-to-machine (M2M) sector
will exceed $2.4 billion in 2024, up from $1.4 billion in 2014. IoT and
M2M via satellite have “significant room for growth,” NSR says,
particularly in sectors such as shipping, agriculture, land
transportation and government.
Certainly, low-earth-orbit satellites (LEOs) and other NewSpace
innovations will come into play in the IoT and M2M growth, as Google and
others seek to capitalize on their heavy extraterrestrial investments.
With 50 billion Internet-connected devices expected to be in
operation in 2020, including computers, smartphones and tablets, both
satellite and wireless will coexist in the IoT market. NSR says that
“even a small sliver of devices being connected to satellite networks
will allow the consumer IoT market to grow well into the future.”
What does the IoT future look like? In the consumer realm, experts
envision “everyday objects” such as cars, homes and kitchen appliances
communicating with one another — sharing data and completing tasks.
According to market research company Gartner, the typical home could
contain more than 500 IoT-friendly “smart” devices by 2022.
IoT also holds great potential in the business sector. For instance,
IoT opportunities for audio and video distribution in the business world
are “substantial,” according to Jimmy Schaeffler, chairman and chief
service officer at The Carmel Group, a consulting firm whose specialties
include satellite TV and satellite radio. Schaeffler will moderate a
Content & Communications World/SATCON discussion Nov. 11 about the
Internet of Things.
For IoT-related businesses, there’s a lot at stake. Market research
company IDC predicts global spending on IoT — primarily on devices,
connectivity and IT services — will reach $1.7 trillion in 2020, up from
$655.8 billion in 2014.
One company placing a big bet on satellite- and wireless-linked IoT
is Sigfox, a French startup that specializes in low-cost wireless
networks.
Through something called the Mustang project, Sigfox, along with
partners Airbus Defense and Space, CEA-Leti and Sysmeca, is working on a
hybrid system of earth- and sky-based technologies to enable IoT
connectivity all over the planet.
“Seamless and constant Internet of Things connectivity between
continents and over the oceans will be a giant step toward realizing the
IoT’s full potential,” Sigfox CEO Ludovic Le Moan said.
As for the role of wireless in IoT, experts say modern-day networks
won’t cut it. Current bandwidth does not meet the needs of large-scale
IoT rollouts, according to the Wireless IoT Forum, because of power
restrictions, communication interference, regulatory constraints and
other roadblocks.
“It is clear the Internet of Things is a key technology to boost
productivity, alleviate key societal challenges, improve our working
lives and to deliver growth and employment,” said William Webb, CEO of
the Wireless IoT Forum. “For these reasons, it merits a higher level of
regulatory attention than many other wireless applications.”
Webb said that although the IoT market is “gathering significant
momentum” around the world, fragmentation in the wireless market
presents a “very real” risk to the widespread delivery of IoT services.
“Without widely agreed, open standards, we risk seeing pockets of
proprietary technology developing independently, preventing the benefits
of mass-market scale,” Webb said.
As those standards are being worked out, satellite innovators will be working on taking advantage of what NSR says are the “lucrative opportunities” presented by IoT and M2M.
Five
years ago, on my podcast Digital Production Buzz, Michael Kammes was
talking about disruptive technologies in media. At the time we were
deeply in the middle of the transition between film, tape and digital
media. Michael, who was then the Workflow Architect for Keycode Media,
was dealing with this on a daily basis as we moved from physical media
to files stored on hard disks.
The disruptive power of change: the phrase rang in my memory. If we
thought the industry was in turmoil five years ago, the turmoil of
change has become pervasive today.
Recently, at a presentation of the LA Creative Pro User Group, Randy
Ubillos, the chief engineering architect for Apple Final Cut Pro and
iMovie, was asked to project the future of media. Randy laughed and
replied: “Faster, smaller, cheaper, better.”
Nowhere is this better illustrated than walking the halls of the
annual NAB Show. Every year, we are surrounded by disruptive technology.
Just off the top of my head, helicopters are replaced by drones, remote
trucks by Tricasters, film cameras by cell phones… the list is endless.
Change is a two-edged sword. Gear is more affordable, but as more
people own gear, competition increases. An individual can be a “one-man
band,” but clients now expect feature-film effects on beer budgets.
(Just because one person CAN do everything does not mean that they
SHOULD.) It is easier than ever to achieve amazing quality, but
quality is devalued at the expense of meeting deadlines. Talent is
expected, disrespected and never properly compensated.
“Faster, smaller, cheaper” is both a prediction and a curse.
Five years ago, this was news. Today, it is woven into the fabric of
everyday life. We can complain, but that won’t change anything; and it
probably won’t even make us feel better. So what do we do?
Over the next few months, I want to explore this subject in more
detail. If the world isn’t likely to change, that means we need to
change to keep up. But how should we evolve?
It seems to me we need to do four things:
Accept the situation
Focus on our strengths
Build on our network
Pick a new technology to learn and master it.
The first step to growth is accepting the present. It may be fun
to look back at the golden past, but that won’t help us today. The
skills of setting up a 2” quad videotape machine are as valuable today
as blacksmithing. Interesting to talk about, useless to make money on.
Next, stop thinking about all the stuff we are bad at and focus on
the stuff we are good at. I’ve learned, over the years, that each of us
tends to obsess about what we wish we could do well, and minimize what
we actually do well. I’ve always wanted to draw but never could. I’ve
always been able to explain technical subjects to large groups of
people, but never considered that much of an asset.
We need to focus on our strengths, because those are what potential clients will hire us for.
Next, continue expanding the network of people that know you. It is
said that the people we know get us our next job. This sounds good, but
isn’t true. What is more accurate is to say that we will get our next
job from people who know who we are AND what we know. It isn’t enough
to know that Mary is a great person. That won’t get her a job. What we
need to know is that Mary is a great person AND she’s an After Effects
whiz. Potential clients need to know who we are and what we know.
Finally, today’s world is defined by technology. Pick a technology
you like – any technology – and master it. Become an expert. There is so
much to learn that no one person can know everything. By becoming
recognized as the leading expert in something, people will turn to you
when they have questions or projects.
Technology is in charge today and as technology changes, we need to
change with it. If we assume that our world will turn upside-down every
couple of years, we can plan for the transition and master it the next
time it comes around.
An
interesting thing happened on the way to Wrestlemania 31. Investors
jumped in the ring and got thrown to the mat: the company hit the peak
of its popularity, and the very next day its stock bottomed out on Wall
Street.
This year’s Wrestlemania was the highest grossing, most viewed and
most socially-centered event in WWE history. It generated gross
revenues of $12.6 million and set a record of 142 worldwide Twitter
trends, including 10 number-one worldwide trends, exceeding any other
broadcast or cable show that night. Yet WWE stock took an equally
dramatic dive of 14.35% the day after, hitting bottom at $14, a 52%
plunge from its 52-week high.
What happened? The answer lies in the nature of emerging OTT
business models, specifically the relationship between subscription
based revenue streams and the leveraging of monumental “over-the-top”
events to sustain them.
Moving from a pay-per-view model, WWE launched its OTT network last
year, quickly expanding to 1.3 million subscribers. Chief Strategy and
Financial Officer George Barrios anticipates that within the next
several years, that total could balloon to 3 or 4 million. “We’ve got
about 100 million broadband homes and top 16 markets,” says Barrios.
“If we can get 3 to 4 percent penetration, we can transform the
company.”
However, a large portion of WWE’s success is based on monumental
events like Wrestlemania 31 and those events hold the seeds of dizzying
unrest. With a $9.99 subscription price tag, it is all too easy for
consumers to buy in before the event, then cancel their subscription
afterward once the excitement and engagement factor drops. And that’s
what has now given investors an aggravated case of the willies.
Is the subscriber base as real as wrestling itself? Investors want to
know. Only time will tell if the network will reign like André the Giant.
Meanwhile, several business concerns need to be ironed out. One of
these is the inherent danger of single-interest content. A network like
WWE only delivers content for those interested in wrestling. Thus, it
is heavily reliant on major events in that category to sustain and grow
interest. Between events, though, especially the monumental ones like
Wrestlemania, interest wanes, which, in a subscription model, often
precipitates a sudden cancellation freefall.
There are two basic responses to this. First, develop more creative
and well-directed content. Second, alter the business model.
Doing the first is an interior effort, but requires strong exterior
insight. Developing the kind of content that will draw and keep a
subscriber base is predicated upon determining exactly what that
audience wants and what it responds to. For this, deep-seated analytics
are needed to dip into the billions of social messages traversing the
Internet daily.
As for the second response to the OTT business conundrum, there are
four basic monetization models: subscription, transaction, hybrid and
free. Right now, WWE is primarily dependent on the first. As OTT
matures, however, that dependency will have to lessen and the dials on
all four models manipulated to ensure the most efficacious mix for any
given interest segment.
WWE, for instance, may still want to offer their standard $9.99
monthly rate, but add a transactional fee on top for monumental events
like Wrestlemania. Or it may want to employ a multi-tiered approach
with basic, mid-level and premium packages aimed at a variety of fanbase
interests and preferences identified by increasingly sophisticated
analytics engines.
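As a back-of-the-envelope illustration of how such a hybrid model changes the math, here is a short sketch in Python. All subscriber counts, prices and the event fee are hypothetical, not WWE's actual figures:

```python
# Illustrative arithmetic only: how a hybrid model (base subscription
# plus a transactional fee for marquee events) changes monthly revenue.
# Every number below is a made-up example, not reported data.

def monthly_revenue(subscribers, base_price, event_buyers=0, event_fee=0.0):
    """Subscription revenue plus optional one-off event purchases."""
    return subscribers * base_price + event_buyers * event_fee

# Pure subscription month: 1.3 million subscribers at $9.99.
plain = monthly_revenue(1_300_000, 9.99)

# Event month: same base, plus a hypothetical 400,000 buyers
# paying a $5 transactional fee on top for the big event.
hybrid = monthly_revenue(1_300_000, 9.99, event_buyers=400_000, event_fee=5.0)

print(round(plain), round(hybrid))  # the event fee adds $2 million that month
```

The point of the exercise is that a transactional layer captures event-driven demand without raising the base price that keeps between-event subscribers on board.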
According to recent studies by IMS Research, “some of the main
drivers behind OTT revenue growth are service providers and content
providers’ willingness to explore new paradigms. Support for multiple
devices and platforms, widespread partnerships and acquisitions, and
global expansion of successful services are some of the vital factors
that will shape the market through the end of the decade.”
All of which means that the investors will ultimately get their legs underneath them and spring up off the mat, and networks like WWE will eventually hoist the victor’s belt.
For
decades, broadcast stations focused their monitoring efforts on the
content they aired via conventional distribution chains. More recently,
however, as the Internet has emerged as an increasingly viable and
attractive means of delivering content, many broadcasters have seized
the opportunity to extend their reach to a much greater geographic area
and a much larger potential audience.
While IP-based content delivery gives broadcasters the chance to
introduce revenue-generating over-the-top (OTT) and Internet streaming
services, it also demands a new approach to monitoring and ensuring
quality of experience (QoE) across the broadcaster’s full service
portfolio. Along with the significant new business opportunity presented
by OTT come significant new challenges in monitoring.
The launch of OTT services yields an immediate and significant
increase in the number of portals and devices on which audiences can
consume content. The astounding array of viewing devices — computers,
tablets, and smartphones — and associated “flavors” of content they
require can make it difficult for operations of any size to assure the
best possible experience. Given the highly competitive media marketplace
today, it is critical that broadcasters have the means to monitor their
OTT services and preserve the quality and value of the content they
provide. At the same time, still-difficult economic conditions in and
beyond the industry make it necessary to adopt affordable and highly
efficient solutions for monitoring OTT services.
Though a number of factors, such as the limitations of supporting
networks and devices, are simply beyond the broadcaster’s control,
it is possible for broadcasters to implement an effective monitoring
model that addresses their OTT services. Broadcasters are finding that
the simplest and least expensive way of establishing OTT service
monitoring is through the extension of their existing monitoring model
and infrastructure.
While monitoring of linear broadcasts typically has required
examination — monitoring and recording — of only a single output, the
monitoring of Internet-delivered content demands continual evaluation of
multiple new outputs, each of which may be distributed in dozens of
different versions. When content is being delivered via more than one
content delivery network (CDN), then the broadcaster may also need to
create additional versions tailored to each CDN’s preferred profile.
Rather than attempt to monitor all of these outputs and every version
of content all of the time, broadcasters are monitoring selectively at
ingest, encode, packaging, delivery, and the signal received by the
viewer. (This last is performed, by necessity, in a controlled
environment, so it cannot account for the influence of ISP service
quality.)
At ingest, this means moving from examination of video itself to
examination of data about that video. By looking at bit rates, syntax,
reference timestamps, and alignment of each of the files, the
broadcaster can get a pretty good sense of a given file’s integrity.
More important, this process can be automated and accelerated across a
large number of files. Correct packaging of files for the appropriate
CDN(s) and platforms can be confirmed relatively simply by “accepting”
those files (and the corresponding manifest) in the same way that a CDN
would.
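Automated ingest-stage checks of this kind can be sketched as follows. The rendition fields, names and tolerance here are illustrative, not any particular vendor's schema:

```python
# Sketch of automated ingest-stage checks across the renditions of one
# asset: bit rates, reference timestamps and alignment, per the workflow
# described above. Field names and the drift tolerance are hypothetical.

def check_renditions(renditions, max_ts_drift=0.04):
    """Return a list of integrity problems found across renditions.

    Each rendition is a dict with illustrative keys:
    'name', 'bitrate_kbps', 'first_pts' (seconds), 'duration' (seconds).
    """
    problems = []

    # Bit rates should descend through the ladder, highest first.
    rates = [r["bitrate_kbps"] for r in renditions]
    if sorted(rates, reverse=True) != rates:
        problems.append("bitrate ladder out of order")

    # Reference timestamps and durations should line up across renditions
    # so players can switch between them cleanly.
    base = renditions[0]
    for r in renditions[1:]:
        if abs(r["first_pts"] - base["first_pts"]) > max_ts_drift:
            problems.append(f"{r['name']}: first PTS misaligned")
        if abs(r["duration"] - base["duration"]) > max_ts_drift:
            problems.append(f"{r['name']}: duration mismatch")
    return problems

ladder = [
    {"name": "1080p", "bitrate_kbps": 6000, "first_pts": 0.0, "duration": 1800.0},
    {"name": "720p",  "bitrate_kbps": 3000, "first_pts": 0.0, "duration": 1800.0},
    {"name": "480p",  "bitrate_kbps": 1200, "first_pts": 0.5, "duration": 1800.0},
]
print(check_renditions(ladder))  # flags the drifting 480p rendition
```

Because these checks read metadata rather than decode video, they scale to a large file count with trivial compute cost, which is what makes automating them across every rendition practical.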
Looking beyond the CDN, the broadcaster can actively sample a variety
of its outputs — with a focus on the most popular platforms and formats
— to get a sense of the service quality being experienced by the
majority of its OTT customers. This method can be used in combination
with “round robin” emulation in which the broadcaster alternately
examines the lower, middle, and upper bit rates across Apple® HLS,
Microsoft® Smooth Streaming, and Adobe® HDS formats, all the while
monitoring content for quality.
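The “round robin” rotation described above amounts to cycling through every (format, bit-rate tier) combination in turn. A minimal sketch, assuming the actual quality probe is handled elsewhere:

```python
# Minimal sketch of round-robin output sampling: cycle through every
# (format, bitrate tier) pair so each combination gets probed in turn.
# The format names come from the article; the probe itself is assumed
# to be a separate measurement step.
import itertools

FORMATS = ["HLS", "Smooth Streaming", "HDS"]
TIERS = ["lower", "middle", "upper"]

def round_robin(cycles=1):
    """Yield each (format, tier) combination, repeating for `cycles` passes."""
    schedule = list(itertools.product(FORMATS, TIERS))
    for _ in range(cycles):
        for combo in schedule:
            yield combo

for fmt, tier in round_robin():
    print(f"probing {fmt} at {tier} bit rate")
```

Rotating through a fixed schedule like this trades continuous coverage of every output for bounded monitoring cost, while still guaranteeing that no format or tier goes unchecked for long.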
This two-pronged approach to monitoring OTT service quality allows the broadcaster to detect issues that threaten the viewing experience and determine whether the cause stems from its own activities or is a consequence of factors further down the distribution chain and beyond its control. Now that descriptive video and captioning are required in Internet-delivered versions of broadcast content, this expansion of monitoring also helps broadcasters to ensure that all their services are compliant with the latest FCC requirements. When equipped to monitor both quality and compliance up to and even beyond the CDN, broadcasters are well-positioned to capitalize on the many benefits of IP-based content delivery.
In
the already overhyped cloud market, hybrid cloud computing is emerging
as the next big thing. In many cases, it’s happening organically over
time. Even as enterprise-sized media operations are deploying private
clouds to increase agility and reduce costs for media workloads, they
are also steadily increasing the number of IT workloads being run in
public clouds. This is helping them in three key areas: test and
development, burst media processing (typically transcoding), and ongoing
growth of the business. IT managers have become unwitting
intermediaries between their companies and cloud service providers,
managing workloads and moving data among multiple clouds.
When the dust settles, the companies that have deployed a
well-thought-out hybrid cloud strategy will come out ahead, so it behooves
you to take a step back and consider all the variables. To get past the
current hype, you should assess how the cloud fits into your IT road
map, and determine which of its many forms makes sense (and which don’t)
for your operations. Some recommendations:
Consider data portability between workgroup storage, enterprise storage, enterprise private cloud storage, and public cloud back-up offerings.
The ability to manage data seamlessly in a hybrid cloud environment
can give you new degrees of freedom for balancing and fine-tuning data
services across cloud resources. Your operations teams already benefit
from managing virtual machines. Having a common set of storage services
will enable them to provision resources and manage data more effectively
across multiple cloud environments.
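The kind of seamless data management described above can be sketched as a common put/get interface spanning multiple storage tiers. This is a minimal, hypothetical illustration, not any vendor's product: the tier names and the toy file-based backends are assumptions.

```python
# A minimal sketch of the "common data platform" idea: one interface
# (put/get/move) over multiple storage tiers, so data can move between
# a workgroup volume, a private cloud, and a public cloud back-up
# without the caller caring which backend holds it. Tier names and
# file-based backends here are hypothetical stand-ins.
import tempfile
from pathlib import Path

class StorageTier:
    """One storage location exposed through a common interface."""
    def __init__(self, name: str, root: Path):
        self.name, self.root = name, root

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

class HybridDataManager:
    """Moves objects among tiers; the object keeps the same key and format."""
    def __init__(self, tiers):
        self.tiers = {t.name: t for t in tiers}

    def move(self, key: str, src: str, dst: str) -> None:
        data = self.tiers[src].get(key)
        self.tiers[dst].put(key, data)
        (self.tiers[src].root / key).unlink()   # remove from source tier

# Usage: stage an asset on "workgroup" storage, then tier it to "public".
base = Path(tempfile.mkdtemp())
tiers = [StorageTier(n, base / n) for n in ("workgroup", "private", "public")]
for t in tiers:
    t.root.mkdir()
mgr = HybridDataManager(tiers)
mgr.tiers["workgroup"].put("promo.mxf", b"essence")
mgr.move("promo.mxf", "workgroup", "public")
print(mgr.tiers["public"].get("promo.mxf"))  # b'essence'
```

The point of the common interface is that the move operation never changes the data format, which is what makes brokering between private and public resources tractable.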
Look for a storage operating system that is designed to provide that
common data platform and serve as the data management foundation for a
hybrid cloud environment. A common set of storage services can address
one of the more complicated aspects of brokering services between
private and public cloud resources — data management — with a platform
that offers a common data format and familiar management among
resources. With such a platform in place, your organization can move
data around dynamically among cloud resources, creating innovative cloud
solutions and avoiding cloud-provider lock-in. Regardless of what
storage you go with for future on-premise needs, pick a storage vendor
with proven technologies that streamline data movement, making it easy
for you to move data from their storage systems to the cloud and back
again.
Design private cloud services with a hybrid future in mind. Make sure future integration/interoperability is possible.
The idea is to position the IT organization so that managers can
choose among cloud services, knowing that for whatever reason — change
in business needs, policies, location, etc. — they can make adjustments
with minimal pain and impact to the business.
In the rapidly evolving world of cloud computing and storage, you
have many opportunities to demonstrate leadership by developing a road
map that employs advanced cloud infrastructures to support business
objectives and carve out competitive advantage. Once you’ve assessed
your short- and long-term IT goals, you’ll need to evaluate competing
cloud offerings to determine which services are compatible with user
requirements, including such important criteria as latency, performance,
and SLAs. Finally, you should interview potential cloud providers,
check customer references, and negotiate business terms. By going
through these steps, you help ensure that the services you put in place
will be able to evolve and change shape in order to meet the needs at
hand. And just as importantly, you help ensure that all of the services
will work together seamlessly between on- and off-premise locations.
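The provider evaluation described above can be organized as a simple weighted scorecard. The criteria, weights, and 1-to-5 ratings below are hypothetical placeholders, to be replaced with figures from your own requirements gathering and reference checks.

```python
# A simple weighted scorecard for comparing cloud providers against
# criteria such as latency, performance, SLAs, and price. All weights
# and ratings are illustrative assumptions, not real benchmarks.
WEIGHTS = {"latency": 0.3, "performance": 0.25, "sla": 0.25, "price": 0.2}

providers = {
    "Provider A": {"latency": 4, "performance": 5, "sla": 3, "price": 2},
    "Provider B": {"latency": 3, "performance": 3, "sla": 5, "price": 4},
}

def score(ratings: dict) -> float:
    """Weighted sum of 1-5 ratings; higher is better."""
    return round(sum(WEIGHTS[c] * r for c, r in ratings.items()), 2)

for name, ratings in sorted(providers.items(),
                            key=lambda kv: score(kv[1]), reverse=True):
    print(name, score(ratings))
```

Even a crude scorecard like this forces the conversation about which criteria actually matter before the negotiation starts.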
Embrace your role as a cloud services broker.
IT managers in larger media operations will gradually become “cloud
services brokers” that manage across private clouds within their own
data centers, among hyperscale cloud providers such as Amazon (AWS),
Google, and Microsoft, and across large public cloud service providers
such as SoftLayer, Rackspace, Orange Business Services, and Verizon.
Media companies will benefit because the competition among
market-leading cloud service providers is creating a broad range of new
options and price points for the delivery of IT and media services.
Think of cloud services as another technology. Just because the cloud
is disruptive doesn’t mean it is best for every application and use
case. Some applications and use cases would benefit greatly from a
“pay-as-you-go” approach and therefore should be considered for the
cloud earlier. In use cases that require a large amount of data (media),
however, the workflows might be too “heavy” to move easily to the
cloud. In those cases, it’s best to wait or take advantage of a hybrid
cloud model. After all, it isn’t just about where you store your media
and business data. It’s also about moving, migrating, and managing your
data. Those costs are actually greater than the purchase or rental of
the storage capacity itself.
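A rough back-of-envelope model illustrates why movement costs can outweigh storage costs for heavy media workflows. All rates below are hypothetical placeholders; substitute your own provider's pricing.

```python
# Back-of-envelope cost model: for data-heavy media workflows, moving
# data out of the cloud can cost more than storing it. The per-GB
# rates below are hypothetical, not any provider's actual pricing.
STORE_PER_GB_MONTH = 0.023   # $/GB/month, assumed object-storage rate
EGRESS_PER_GB = 0.09         # $/GB transferred out, assumed egress rate

def monthly_cost(stored_gb: float, moved_out_gb: float) -> dict:
    storage = stored_gb * STORE_PER_GB_MONTH
    movement = moved_out_gb * EGRESS_PER_GB
    return {"storage": round(storage, 2),
            "movement": round(movement, 2),
            "movement_dominates": movement > storage}

# A 100 TB archive that pulls 40 TB of media back out each month:
print(monthly_cost(100_000, 40_000))
```

Under these assumed rates, moving 40 TB out per month costs more than storing the full 100 TB, which is exactly the situation where a hybrid model earns its keep.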
Transitioning from managing workgroups to managing internal private
clouds to integrating with public clouds is not a simple process. As
cloud environments increasingly evolve into a blend of different types
with multiple vendors, your role as an IT cloud services broker —
whether intentional or accidental — will continue to grow in importance.
Yours is a challenging and rapidly evolving environment that will
require extensive IT experience, cloud knowledge, and well-honed
negotiating skills when you’re hammering out deals that include such
major issues as data control and management, consistent SLAs, and
security. You must learn to hide the complexities of hybrid clouds from
business units while supporting an agile, best-of-breed approach that
optimizes their investments in both private and public cloud resources.
Even as you’re considering all the variables, understand that this is an iterative process that will need to be repeated as needs evolve and new options become available. The payoff will be a more agile media organization that is well positioned to boost business performance to unprecedented levels.
As
we turn the corner into 2015, I want to highlight a once futuristic
technology that is already changing our lives in many positive ways:
computer vision—training computers to investigate images and understand
them the way the human brain does, but without human fatigue and at
massive scale. In years to come, this technology will continue to help
make our lives safer and easier.
People have long pointed cameras at a wide range of significant
events to document and monitor them. Traffic cameras record images of
routine moments like kids in crosswalks near schools, and cell phone
cameras record seminal events like political demonstrations in
oppressive nations. To date, we have captured more hours of video than
humans could ever process, and often, significant seconds of
footage sit on hard drives or get buried in mountains of visual memories,
never to be seen by human eyes.
That’s where computer vision comes in. Computers can process and
reconstruct video to make those significant few moments visible to us.
They can also be trained to look for small anomalies or markers and
tirelessly scan through massive databases looking for the seconds we
want them to find. We’re just starting to see benefits from this
innovative and constantly evolving image processing technology.
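A toy example of this tireless scanning: simple frame differencing over synthetic video flags the few frames where something changes. Production systems use trained models and far richer features; the threshold here is an arbitrary assumption.

```python
# Minimal anomaly scan via frame differencing: flag frames whose mean
# absolute change from the previous frame exceeds a (hypothetical)
# threshold. Real computer-vision systems use trained detectors; this
# toy only shows the "find the significant seconds" pattern.
import numpy as np

def flag_anomalies(frames: np.ndarray, threshold: float = 10.0) -> list:
    """Return indices of frames whose mean absolute difference from the
    previous frame exceeds `threshold` (grayscale intensity units)."""
    hits = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        if diff.mean() > threshold:
            hits.append(i)
    return hits

# Synthetic "video": static frames, with a bright object appearing at frame 5.
frames = np.full((10, 32, 32), 100, dtype=np.uint8)
frames[5:, 8:24, 8:24] = 220   # object appears at frame 5 and stays

print(flag_anomalies(frames))  # only the transition frame is flagged: [5]
```

Out of ten frames, the computer surfaces exactly one for a human to review, which is the economy the article describes at massive scale.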
Many organizations—big and small—are deploying computer vision to
accomplish commercial goals, like building self-driving cars, detecting
shifts in the earth’s environment, and helping lifesaving services get
to remote areas. Here are five ways computer vision is contributing to
better years to come.
1. Computer Vision in Cars
It started with warning systems in cars in the early 2000s: a loud
beep indicates we’re about to back into a garbage can, run up a curb or
nick a parked car. Since then, more and more vehicles have rear-facing
cameras to see objects before we get close enough to warrant the loud
beep. Most recently, automakers are installing sensor systems that
recognize images that are predictive of potential danger, like a car
stopping in front of our car. Once the danger is recognized, the system
automatically brakes to prevent an accident. And in the
near future, self-driving cars will use computer vision to
change the way we think about transportation: these
vehicles will detect and process the thousands of images they
encounter en route, arriving at destinations safely and without
accidents, likely without a human involved at all.
2. Seeing the Scene 360
A decade ago, news and reports came to us from one—or two—reliable
sources that defined the “truth”. Information came from sensible
locations reporters could get to or happened to be at when something
significant took place. Now, with cell phones tucked in their pockets,
almost anyone can be a reporter. These cell phones capture endless stills and
video, creating millions of potential sources of news and “truth”. Among
those who have benefited from this plethora of video reports are
nonprofit agencies like Amnesty International.
Computer vision and image processing software help agencies like
Amnesty gather valuable video evidence collected by global affiliates to
track the humane treatment of people worldwide. These groups can better
fulfill their mission by gathering real—and often real-time—video,
which can be material that governments may not want us to see. Image
processing takes video evidence and helps truth-seeking analysts extract
the precise, key information to combine and reveal a more comprehensive
vision of what’s happening. The result: a new side to the story
supported by irrefutable fact. We’ve seen this type of video
reconstruction on shows like “CSI”; it’s a technology used by law
enforcement every day. Now nonprofit organizations use it to expose
injustice or motivate justice seekers to pitch in and help with dire
causes.
As we continue to see the evolution of true investigative,
crowd-based journalism, organizations can use innovative technology to
make better sense of video and images to create stories that account for
as many perspectives as possible. Used in this capacity, the technology
gives organizations and governments the potential to cultivate greater
levels of justice, safety and cultural competency.
3. Flying Kites Making Maps Better
Lately, I’ve been impressed with a couple of grassroots
organizations trying to update the world’s maps and help people in times
of trouble. Their efforts are aimed at mapping unmapped areas, such as
far-away refugee camps and remote areas hit by natural disasters. These
efforts make directing humanitarian services to people in distress much
easier, and the process helps provide more precise mapping information
to countries that don’t have immediate and reliable access to it.
OpenStreetMap.org is a crowd-sourced map of the world created by
volunteers doing exactly this: mapping unmapped areas. The data is free
to use under an open license and offers a collaborative vision of
ever-changing geographies. To get new aerial information about unmapped
areas, non-profits such as PublicLab.org are using cameras suspended from
kites to capture images and trace over them to define roads in
OpenStreetMap.
Computer vision and geospatial image processing are helping nonprofits
and their crowds of volunteers work more efficiently. Of course,
mapping is an enormous and ever-changing undertaking, but with image
processing, using open source visuals (from kites, drones, manned
aircraft), we can process billions of images to create more up-to-date
and accurate maps. New roads can be identified and traced, blocked roads
can be corrected, and other geographic obstacles can be updated and
disseminated quickly and with greater confidence and reliability.
4. Amber Alerts and Crowds of Eyeballs
When a child is missing, a crime is committed or a natural disaster
strikes, time is of the essence. Search efforts for any crisis need to
start immediately – the faster a person or moving vehicle can be
located, the better the outcome. If an Amber Alert is issued about a
late-model sedan that left a shopping center parking lot between 11:00
and 11:30 am and was subsequently spotted 8 blocks away heading south,
computer vision’s fast detection and tracking abilities can search
through traffic camera and aerial traffic footage to identify potential
vehicles (e.g., all cars that came from that parking lot and travelled
to the location of the second spotting) for human eyes to review. The
initial computer search significantly improves the efficiency for the
expert analysts who are tasked with reviewing potential vehicles. This
sort of “heavy lifting” that computers can accomplish to assist human
effort can be applied to a variety of searches where time is of the
essence and millions of images, or hours of video, can be eliminated in
order to get to the answer. (http://youtu.be/HnwDpg2vvdg)
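The filtering step described above can be sketched as a simple query over detection records; the camera names, track IDs, and timestamps below are entirely hypothetical.

```python
# Sketch of the Amber Alert "heavy lifting": filter camera detections
# down to vehicle tracks that left the origin during the alert window
# AND later appeared near the second sighting, leaving a short list for
# analysts. All records and field names here are invented.
from datetime import datetime

detections = [
    # (track id, camera location, timestamp)
    ("car-1", "mall_lot_exit", datetime(2015, 3, 1, 11, 10)),
    ("car-1", "8_blocks_south", datetime(2015, 3, 1, 11, 25)),
    ("car-2", "mall_lot_exit", datetime(2015, 3, 1, 10, 40)),  # too early
    ("car-3", "mall_lot_exit", datetime(2015, 3, 1, 11, 20)),  # never reappears
]

def candidates(detections, window_start, window_end, origin, second_sighting):
    left_origin = {tid for tid, loc, ts in detections
                   if loc == origin and window_start <= ts <= window_end}
    reappeared = {tid for tid, loc, ts in detections
                  if loc == second_sighting and ts >= window_start}
    return sorted(left_origin & reappeared)   # tracks for analysts to review

print(candidates(detections,
                 datetime(2015, 3, 1, 11, 0), datetime(2015, 3, 1, 11, 30),
                 "mall_lot_exit", "8_blocks_south"))  # ['car-1']
```

The computer reduces thousands of tracks to a handful of candidates; the final identification remains a human judgment.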
5. Predictive Analytics
What if we could use computer vision to “predict” events that affect
public safety? If we know that the density and timing of traffic in one
intersection has led to pedestrian accidents, can we analyze video
footage from similar intersections to predict which ones are more
dangerous than others? And, can we innovate alternative solutions by
changing the bus schedules or the number of seconds the “Cross” sign
flashes? Likewise, if merchants in the area of the annual New Year’s
celebration are upset about storefront access, parking, and traffic
snarls during the festivities, could we review the footage from past
years to identify the hot-spots and better address traffic flow?
We can start the exploration now: we have cameras in place. We have a
whole new generation of aerial cameras (satellites and drones) that can
provide data about almost any area. With traffic patterns, the bigger
the dataset the better. The possibilities for making accurate “big data”
predictions and finding solutions are endless. We can take video of a
dangerous crosswalk, see what the data truly shows, and make changes that
could potentially prevent loss of life.
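One way to start that exploration is a crude risk ranking over video-derived counts. The intersections, counts, and weighting below are invented for illustration and are not calibrated against any real data.

```python
# Toy version of the intersection analysis proposed above: score each
# intersection from (hypothetical) video-derived counts of pedestrians,
# vehicles, and observed near-misses, then rank the most dangerous for
# intervention (longer "Cross" times, changed bus schedules, etc.).
def risk_score(peds_per_hr: float, cars_per_hr: float, near_misses: int) -> float:
    # Exposure (pedestrians x vehicles) weighted by observed near-misses;
    # the weights are illustrative assumptions, not calibrated values.
    exposure = peds_per_hr * cars_per_hr / 1000.0
    return exposure * (1 + near_misses)

intersections = {
    "5th & Main": (120, 900, 3),
    "Oak & 2nd": (40, 400, 0),
    "Pine & 9th": (200, 1500, 1),
}

ranked = sorted(intersections,
                key=lambda k: risk_score(*intersections[k]),
                reverse=True)
print(ranked)  # most dangerous first
```

With real footage, the counts would come from the detection pipeline rather than hand-entered numbers, but the ranking logic is the same.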
We can make our crosswalks more secure, we can innovate safer cars, we can find a missing person faster, we can locate unmapped locations, and we can reformulate the value of video in every aspect of life. We are just beginning to understand the mighty effects of computer vision – to help make our lives better and more efficient. We can accomplish so many more extraordinary missions in our fast-growing and fast-moving world with this once-futuristic technology. Now.
Despite discussion of audience fragmentation, which is occurring for
news, drama, comedies, and perhaps even movies, live events are in fact
attracting larger and larger audiences. For example, February’s NFL
Super Bowl XLIX is expected to have reached an average of 114.4 million
U.S. viewers, making it the most viewed Super Bowl game to date. The
steadily climbing number of viewers is not unique to NFL games, with
live events continuing to attract bigger and bigger audiences.
While the 2015 Super Bowl was watched primarily by American
viewers, international sports competitions with rotating host countries,
notably the Olympics and the World Cup, are also frequently attracting
larger international audiences. The London
2012 Olympics, for example, were viewed by an audience of more than 3.6
billion people worldwide. The 2012 London Olympics were also streamed
heavily online with official broadcasting partners being responsible for
1.9 billion online video views. The year 2012 also saw the introduction
of both live and on-demand Olympic Games content via an official
Olympics YouTube channel, which was streamed a total of 59.5 million
times, of which 34.5 million were live.
The 2014 Sochi Winter Olympics also saw an increase in viewership
compared with the previous Winter Olympics: the 2010 games drew a
total of 1.8 billion viewers worldwide, while the Sochi Winter Olympics
drew 2.1 billion viewers. The Opening Ceremony of the games was
viewed by 45.8 million people within the host country, with over 18
million video views being generated from the online coverage of the
games. Within the United States over 199 million people watched at least
some of the 2014 Winter Olympics and NBC Sports Digital saw one game
(the men’s ice hockey semi-final between the United States and Canada)
generate 2.1 million “TV Everywhere” streams, which is believed to be
the largest verified “TV Everywhere” audience within the United States.
A Number of Factors
Key contributors to the increasing audience size are many, but include:
Competition with other programming has fallen. The ability to
record and “catch up” with alternative programming that airs
simultaneously with live events means less competition for live
programming. Many different mechanisms are available, including DVRs,
operator VOD catalogs, and access via authenticated (TV Everywhere) apps
or PC platforms. Viewers can therefore watch these live events as they
happen without risking missing out on the other content that is a regular
part of their routine.
International syndication has risen. This year’s Super Bowl game was
broadcast alongside seven live foreign-language broadcasts by
networks worldwide, making the game more inclusive. The languages
offered were Spanish, Hungarian, Japanese, French, Portuguese, Mandarin
Chinese, and German. Increasing focus on syndicating and promoting major
events worldwide has increased the globalization of content and
sports.
Multiscreen (Internet) syndication makes viewing more pervasive and
convenient. In addition to the live television broadcast, NBC made the
Super Bowl available to stream live on PCs and tablets. Verizon held the
exclusive rights to the mobile (smartphone) streams. At its peak, the
stream was viewed by 1.3 million viewers. Long, multi-stage, and
especially multisport events like the Olympics particularly benefit from
Internet syndication that allows fans to choose sports-centric viewing,
country-centric viewing, award-centric viewing, etc.
Multiscreen (Internet) syndication helps bridge the time barriers.
Multiscreen viewing can be especially helpful for users who wish to
access content across time zones. For example, kick-off for this year’s
Super Bowl was scheduled for 6:30 p.m. Eastern; in Japan, the 14-hour time
difference means that kick-off took place just in time for the commute
to work on Monday morning. For those who don’t drive, the morning
commute can be used to view events such as these on mobile devices, or
PCs at work where televisions are not available.
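The time-zone arithmetic can be checked directly, shown here for the Feb. 1, 2015 kickoff and assuming a 6:30 p.m. Eastern start:

```python
# Convert the 6:30 p.m. Eastern Super Bowl XLIX kickoff (Feb. 1, 2015)
# to Japan Standard Time using the IANA time-zone database.
from datetime import datetime
from zoneinfo import ZoneInfo

kickoff_et = datetime(2015, 2, 1, 18, 30, tzinfo=ZoneInfo("America/New_York"))
kickoff_jst = kickoff_et.astimezone(ZoneInfo("Asia/Tokyo"))
print(kickoff_jst.strftime("%A %H:%M"))  # Monday 08:30 -- the morning commute
```

Letting the time-zone library do the conversion avoids hand-counting offsets, which is easy to get wrong across the date line.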
Social media and international press exposure: Social media can also
play a part in making events such as these so popular. The increase in
social media conversation can lead consumers who might not otherwise have
watched to tune in in order to avoid being left out. Social conversation can
also generate interest around live events from those who may otherwise
have had limited exposure to it: for example, the Super Bowl, while
primarily of interest within the United States, is becoming more popular
internationally. The 2014 Super Bowl generated 25.3 million tweets,
while official FIFA content reached 451 million Facebook users during
the 2014 World Cup.
Events Will Become the Only True “Live”
The ever-increasing audience accessing live broadcasts and streams
is, of course, of great benefit to broadcasters and is in demand by
advertisers wishing to increase their brand awareness. In addition to
the increased viewership and opportunity for ad views and the ability to
charge extremely high rates for ad slots, a study from Nielsen claims
that ads shown during the Super Bowl can provide opportunities for
repeat exposure that lead to memorability. NBC’s heavy promotion of The
Blacklist during the Super Bowl paid off immediately: the follow-up
episode was estimated to have drawn 26.5 million viewers, a major
increase in viewership for scripted programming.
As history demonstrates, it is likely that the audiences for live
events will continue to grow. Global events now are able to draw
increasingly large live crowds, but for those who are unable to
travel to foreign countries to participate in these events as they
happen, the increasing availability of live content means that consumers
are far less likely to miss out.
From an industry perspective, virtualization is working itself into
the broadcast workflow. Major broadcast organizations are finding ways
to invest less in broadcast centers and overflow to the cloud
for their largest-event capacity. Impressive cloud-based workflows have
been shown, such as EVS and Elemental discussing the World Cup production
flow, while iStreamPlanet, Adobe, Akamai, Microsoft, and others have
shown workflows used in both the Olympics and the Super Bowl.
When we put together the supply (workflow) and demand (consumer) sides of the equation, we see significant elasticity in the number of available channels on any pay TV system. Drama and comedy channels will be personalized and delivered on-demand (integrating start-over, catch-up, and VOD seamlessly), news content will be delivered in a linear channel with personalization based on specific interests, and, yes, live events will appear at the forefront of our experiences and should receive significant promotion by content holders and distributors alike.