Three to five links every weekday from the worlds of entertainment, technology and design
If we think of the world as a book, then augmented reality is the digital magnifying glass that lets us explore the details behind every word, letter, and punctuation mark, right down to the granular texture of the page itself. AR layers context onto an interface you can see and understand. It blends two realms, the physical world you see with your eyes and the digital world you see on your device, into a single interface, creating a new layer that's as familiar as the phone in your hand.
When married to image recognition and machine vision, AR opens a whole new dimension of possibilities. Museums in particular are pushing the envelope, showcasing the technology's potential through creative implementations. They're using AR for everything from wayfinding to bringing objects to life to creating entirely new digital artworks. When you layer contextual information on top of objects, products, or places, you end up with a seamless, almost magical experience, and the cultural sector is proving just how broad the possibilities are.
The BBC has warned that it risks being squeezed out of an "ever more competitive global market" by the likes of Netflix and Amazon, as well as the Hollywood studios, unless it finds new ways to bolster its income. The warning comes as it publishes its Annual Plan, a report laying out its programming strategy for the forthcoming year, including plans to launch 15 new dramas on flagship channel BBC One and to bolster comedy on BBC Three.
Built as a showpiece, the car pairs an electric powertrain with a restored 1959 Mini Cooper. Of course it's red with a white stripe, and, of course, there are rally lights across the grille. This is how a Mini should look, and an electric powertrain should make it feel the part, too. Minis are supposed to be oversized go-karts that go like mad with near-instant acceleration. And that's the best part of electric vehicles: instant torque that produces insane acceleration.
Google researchers have found ways to make machine-generated speech sound more natural to humans, members of Google's Brain and Machine Perception teams said today in a blog post that included samples of the more expressive voices. Earlier today, Google announced the beta release of its Cloud Text-to-Speech service, giving customers access to the same speech synthesis used by Google Assistant. Cloud Text-to-Speech is powered by DeepMind's WaveNet, which generates natural-sounding voices.