German automaker BMW and Chinese internet giant Baidu will end their joint research on self-driving cars, executives from the two firms said on Friday, and Baidu is now searching for new global research partners. Wang Jing, the head of autonomous car development at Baidu, told Reuters the company was now using cars from Ford's Lincoln brand in its U.S. testing, declining to elaborate.
Autonomy is coming to warfare, and some would say it’s already here. Weapon systems driven by artificial intelligence (AI) algorithms will soon be making potentially deadly decisions on the battlefield. This transition is not theoretical. The immense capability of large numbers of autonomous systems represents a revolution in warfare that no country can ignore.
The group urged the NHTSA to amend standards “to provide for the self-certification of vehicles that would allow fully self-driving operation without the presence of, or capability to use, human operator controls” such as steering wheels or brake pedals.
Researchers have developed an AI system that, tested on a small subset of pictures, was able to predict criminality with 89.5 percent accuracy. Such technology raises ethical and legal questions about its application, joining a long list of concerns surrounding AI in general.
For the next tests, the researchers put a robot called iCub through the same paces. This little guy is an "embodied neural network" – that is, an artificial intelligence system with deep-learning capabilities, wrapped in a robotic body with the proportions of a toddler.
A team of researchers at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) has created a deep-learning algorithm that can generate its own videos and predict the future of a video from a single frame. As detailed in a paper to be presented next week at the Conference on Neural Information Processing Systems in Barcelona, the CSAIL team trained the algorithm by having it watch 2 million videos, which would run for over a year if played back to back. These videos consisted of banal moments in day-to-day life, to better accustom the machine to normal human interactions. Importantly, the videos were found "in the wild," meaning they were unlabeled and thus offered the algorithm no clues as to what was happening in them.