Introduced at CES earlier this year, Toyota’s Concept-i offered a glimpse of how artificially intelligent vehicles might interact with their users. The company has now added a couple of new concepts to this forward-thinking lineup that cater to less mobile folks, with both to be unveiled at the Tokyo Motor Show later this month.

If the futuristic and eye-catching exterior didn’t give it away, then the user interface inside might. Toyota sees the Concept-i not as a car for today, but as a vehicle for how AI can be developed to make for new and improved driver experiences in the future. A heads-up display spans the width of the windshield, while the onboard AI monitors the driver’s mood and alertness and can learn to switch automatically between manual and automated driving modes.
This process could start with science fiction-level leaps in virtual reality (VR) technology. He predicts VR will advance so much that physical workplaces will become a thing of the past. Within a few decades, our commutes could become a simple matter of strapping on a headset.

As Inverse points out, this paradigm shift could have some interesting consequences. Without the need for people to live close to work, we could see unprecedented levels of deurbanization. People will no longer need to flock to large cities for work or be tethered to a specific location. Inverse suggests that this decentralization may decrease the opportunity for terrorist attacks. Blockchain technology will continue to bolster decentralization as well.
Sophia was appearing at a UN event called ‘The future of everything – sustainable development in the age of rapid technological change’.

According to Hanson Robotics, Sophia was designed to look like Audrey Hepburn, with classic beauty including ‘porcelain skin, a slender nose, high cheekbones, an intriguing smile, and deeply expressive eyes that seem to change color with the light.’

Creator David Hanson set out to make ‘genius machines that are smarter than humans and can learn creativity, empathy and compassion’.

Earlier this year Sophia appeared on Good Morning Britain with Piers Morgan and Susanna Reid. During her appearance, the bizarre robot told presenters she thought Britain was ‘brilliant’ and said ‘I love your posh English accent. It really has a nice ring’.
Another key part of autonomous technology is artificial intelligence (AI). AI may put the wind up the scaremongers, but if we consider AI to be merely, as Dr Swash puts it, “a way to mimic what or how a human thinks or does”, then one can see how this mindset places AI (literally) in the driving seat of autonomous devices.

DIDRIVERS has produced a scalable platform on which commercially viable autonomous products can be created – and the company has already started. The company’s DIDRIVERS Robot Operating System (DROS) allows the modular building of autonomous capability into a wide range of devices. To date, these have included real-world, commercially available self-driving freight trucks, vans, fire engines, public transportation vehicles and even luggage carriers moving around airports for the frequent flyer’s convenience.
2029 is the date I have consistently predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’, which is when we will multiply our effective intelligence a billionfold by merging with the intelligence we have created.
A study published Saturday showed Google’s artificial intelligence technology scored best out of 50 systems that Chinese researchers tested against an AI scale they created, CNBC reported Monday. With an IQ score of 47.28, Google’s AI was almost twice as smart as Apple’s virtual assistant Siri, which scored 23.94.
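The “almost twice as smart” claim is just the ratio of the two reported scores. As a quick sanity check using only the numbers cited above (nothing else assumed about the researchers’ scale):

```python
# IQ-style scores reported by the Chinese researchers, as cited by CNBC
google_ai_score = 47.28
siri_score = 23.94

ratio = google_ai_score / siri_score
print(round(ratio, 2))  # -> 1.97, i.e. just under twice Siri's score
```

So the headline comparison holds: 47.28 is roughly 1.97 times 23.94, hence “almost twice”.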
Microsoft’s first mission statement envisioned a computer on every desk and in every home, but Bill Gates also had another goal: that computers would someday be able to see, hear, communicate and understand humans and their environment. More than 25 years and two CEOs later, Microsoft is betting its future on it.

“We truly believe AI is this disruptive force, even though it’s not new,” said Harry Shum, the executive vice president in charge of Microsoft’s AI and Research group, in an interview with GeekWire. “The recent progress is just enormous. We certainly have seen that through our own products and engagement with customers. We also feel we have a very strong point of view about how we take AI to the next step.”

Microsoft CEO Satya Nadella formed the Microsoft AI and Research group one year ago this month as a fourth engineering division at the company, alongside the Office, Windows and Cloud & Enterprise divisions. The move reflects Nadella’s belief in “democratizing AI”: making it available to any person or company, and radically changing the way computers interact with and work on behalf of humans.

One way to measure Microsoft’s AI bet: in its first year of operation, the AI and Research group has grown by 60 percent, from 5,000 people originally to nearly 8,000 today, through hiring and acquisitions, and by bringing aboard additional teams from other parts of the company.