
AI

Kersti Kaljulaid and Sophia talk weaponized AI

Friday, 16 February, 2018

The organizers of this year’s Munich Security Conference decided they’d try something novel for the pre-event titled “The Force Awakens: Artificial Intelligence & Modern Conflict”, so they put Sophia centre stage and had her do the introductions. Hanson Robotics, Sophia’s creator, describe her as their “most advanced robot”, and for many present last night it was their first opportunity to see a chatty bot in action.

The verdict? Unimpressive. The quality of Sophia’s audio output was sub-standard, but much worse was her language. The Munich Security Conference is an annual gathering of a global elite comfortable with the global lingua franca, but those in charge of Sophia’s speech rhythms ignored the fact that speed does not always equal progress. Her pace of delivery was far too fast even for most native speakers present. Earlier this week in the Financial Times, Michael Skapinker posited that “Europe speaks its own post-Brexit English”, claiming that this so-called “Eurish” is a mix of “romance and Germanic influences — and no tricky metaphors”. Sophia, clearly, does not read the FT, and neither do those in charge of her interaction with the real world. Skapinker’s “Eurish” is mostly imaginary, but chatbot programmers would do well to slow the pace of delivery, simplify the vocabulary and go easy on the metaphors.

That aside, the real star of the show was Kersti Kaljulaid, President of Estonia. Her English was perfectly attuned to the wavelength of the audience, and her knowledge of both artificial intelligence and modern conflict was extraordinary. Then again, she would be familiar with both topics: Estonia is a leader in digital transformation, and the 2007 Russian cyber-attack on the country was a sign of the dangerous new world we now share with the ruthless regimes in Moscow, Beijing and Teheran. Kersti Kaljulaid is on the front line, and we are lucky that she understands the grave nature of the threats posed by AI in the hands of those who wish to destroy the civilization and the society she represents so eloquently and so knowledgeably.

Sophia


Insh-AI: WOTF on Ash Wednesday

Wednesday, 14 February, 2018

Back in November last year, Wired ran an article titled Inside The First Church of Artificial Intelligence. The writer, Mark Harris, introduced readers to Anthony Levandowski, the “unlikely prophet” of a new religion of artificial intelligence called Way of the Future (WOTF). Levandowski’s church, we learn, will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.”

Last Sunday in the Sunday Times, Niall Ferguson, Senior Fellow at the Hoover Institution at Stanford University, asked: “Shall we begin to worship the machines — to propitiate them with prayers, or even sacrifices?” And he provocatively proposed: “Perhaps we shall need to devise an AI equivalent of ‘Inshallah’ — Insh-AI, perhaps.” Ferguson’s syndicated column has the oddly banal title, The machines ate my homework, but it offers food for serious thought, especially today, Ash Wednesday. Ashes to ashes. AI, he says, is all about “getting computers to think like a species that had evolved brains much bigger than humans — in other words, not like humans at all.” One consequence of this might be to “return humanity to the old world of mystery and magic. As machine learning steadily replaces human judgment, we shall find ourselves as baffled by events as our pre-modern forefathers were.”

What will become of us then? Will we, in despair, in hope, follow WOTF? Ferguson quotes the German sociologist Max Weber, who argued that modernity replaced mystery with rationalism and as a result people “said goodbye to magic and entered an ‘iron cage’ of rationality and bureaucracy.” If AI leads to a re-mystification of the world and a revival of magical thinking, Ferguson knows what he’s going to do: “I’m staying put in Weber’s iron cage,” he says.

But you can’t put ashes on an AI and not everyone aspires to being caged.


AI is the ‘New Electricity’

Friday, 17 November, 2017

Well, so says Andrew Ng, co-founder of Coursera and an adjunct Stanford professor who founded the Google Brain Deep Learning Project. He was delivering the keynote speech at the AI Frontiers conference that was held last weekend in Santa Clara in Silicon Valley.

“About 100 years ago, electricity transformed every major industry. AI has advanced to the point where it has the power to transform every major sector in coming years,” Ng said.

Right now, the sectors that are getting the AI “electricity” and making it part of their core activities include tech and telecoms companies, automakers and financial institutions. These are digitally mature industries that focus on innovation over cost savings. The slowest adopters of the new “electricity” are in health care, travel, professional services, education and construction.


Enter the data labelling professional

Thursday, 16 November, 2017

You hear the words “artificial intelligence” and what do you think of? Dystopia vs. Utopia. Stephen Hawking warning us to leave Earth and Elon Musk sounding the alarm about a Third World War. On the other hand, we have Bill Gates saying there’s no need to panic about machine learning and Mark Zuckerberg urging us to cool the fear-mongering. AI, apprehension and confusion go hand-in-hand today. The fear of a future unknown is combined with a present dread that AI will take our jobs away. But every disruptive technology has seen the replacement of human workers, and every time we’ve been ingenious enough to develop new jobs; AI could be every bit as much a job generator as a job destroyer.

A recent report by Gartner predicts that while AI will eliminate 1.8 million jobs in the US, it will create 2.3 million jobs. The question is: Which kind of jobs will these be? Data scientists, with qualifications in mathematics and computer science, will be eagerly sought and highly paid, but what about the masses? Three words: Data Labelling Professional.

Imagine you want to get a machine to recognize expensive watches, and you have millions of images, some showing expensive watches and some showing cheap ones. You need people to label which images contain expensive watches so the machine can learn to recognize them and ignore the rest. In other words, data labelling will be the curation of data: people will take raw data, tidy it up and organize it for machines to process. In this way, data labelling could become an entry-level job, or even a blue-collar job, in the AI era. When data collection becomes pervasive in every industry, the market for data labelling professionals will boom. Take that, Stephen Hawking.
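What a labelling professional actually produces can be sketched in a few lines of Python. This is only a toy illustration of the curation step described above; the filenames and labels are hypothetical.

```python
# A minimal sketch of a data labelling workflow: humans attach labels to
# raw images so a classifier can learn from them. All names are made up.

raw_images = ["watch_001.jpg", "watch_002.jpg", "watch_003.jpg"]

# A labelling professional reviews each image and records a label.
labels = {
    "watch_001.jpg": "expensive",
    "watch_002.jpg": "cheap",
    "watch_003.jpg": "expensive",
}

# The curated output: (image, label) pairs, ready for a training pipeline.
training_data = [(img, labels[img]) for img in raw_images]
print(training_data)
```

The machine never sees “expensive” as a concept; it only sees which pixels co-occur with which labels, which is why the human labelling step matters.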

watches


Singularity approaching

Saturday, 4 November, 2017

What will life be like when the predicted “singularity” arrives? Visual Suspect, a video production company based in Hong Kong, has come up with a depiction of what, for many, is a terrifying prospect. It’s terrifying because the “technological singularity” is the notion that artificial super-intelligence will trigger runaway technological growth, resulting in revolutionary changes to civilization.

The starting point for those wishing to learn about the singularity hypothesis remains The Singularity Is Near: When Humans Transcend Biology written by the futurist Ray Kurzweil and published in 2005. Three years later, Kurzweil and Peter Diamandis founded the Singularity University in California. It offers educational programs that focus on scientific progress and “exponential” technologies, especially AI.


The daily wisdom of Google

Thursday, 26 October, 2017

Back in 1400 BC, the Oracle of Delphi was the most important shrine in ancient Greece. Built around a sacred spring, it was considered to be the omphalos (the centre of the world), and people came from all over Greece and beyond to have their questions answered by the Pythia, the priestess of Apollo. Her cryptic answers could determine the course of everything from when a farmer planted crops to when an empire declared war. Today, Google has replaced Delphi as the omphalos, and people from all over Greece and beyond can have their questions answered by the algorithm. Example:

Google oracle

History: For centuries, scholars congregated at Delphi, and it became a focal point for intellectual inquiry as well as a meeting place where rivals could negotiate. The party ended, as it were, in the 4th century AD when a newly Christian Rome proscribed the prophesying. Then, on 4 September 1998, Google was founded in California.


A Dyson EV is on the horizon

Monday, 2 October, 2017

“It has remained my ambition to find a solution to the global problem of air pollution. At this moment, we finally have the opportunity to bring all our technologies together in a single product. So I wanted you to hear it directly from me: Dyson has begun work on a battery electric vehicle, due to be launched by 2020.”

So wrote Sir James Dyson to his employees last week. It was news, but not a surprise. In October 2015, Dyson bought the solid-state battery company Sakti3 for $90 million, saying it had “developed a breakthrough in battery technology.” That Dyson is working on an electric vehicle has been apparent in its recent hiring: executives from Aston Martin and Tesla are among those headhunted. Dyson says it has a team “over 400 strong” on the project and plans to invest more than £2 billion in the venture. The vehicle is set to hit the road in 2020, and Asia will be a key market. The company’s decision to open a tech centre in Singapore this year with a focus on R&D in AI is part of a greater global strategy.

Founded in 1987, Dyson is best known for its home appliances, including its bagless vacuum cleaners, fans, heaters and a hair dryer, and the company’s revenue reportedly hit £2.5 billion last year. Because most Dyson devices use small, efficient electric motors, the company sees itself as an electric motor company, not a vacuum cleaner company, and electric vehicles are very much about motors.

Writing about Sir James and his dreams, Jack Stewart noted yesterday in Wired: “He could enforce Britain’s strong tradition of producing boutique automakers, the likes of Aston Martin, Lotus, TVR, MG, and Caterham.”

Dyson


Post from Mastersley Ferry-the Green

Friday, 21 July, 2017

Monday’s post here, The AI Apocalypse: Warning No. 702, was about artificial intelligence (AI) and Elon Musk’s alarming statement: “It is the biggest risk that we face as a civilization.” As we pointed out, fans of AI say such concerns are hasty.

Dan Hon is a fan of AI and he’s just trained a neural network to generate British placenames. How? Well, he gave his AI a list of real placenames and it then brainstormed new names based on the patterns it found in the training list. As Hon says, “the results were predictable.” Sample:

Mastersley Ferry-the Green
Borton Bittefell
Hisillise St Marsh
Westington-courding
Holtenham Stye’s Wood Icklets
West Waplest Latford
Fackle Village
Undwinton Woathiston
Thorton Stowin
Sketton Brittree
Ham’s Courd
Matton Oston
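Hon trained a neural network, but the underlying idea (learn which characters tend to follow which in a list of real names, then sample new sequences from those patterns) can be sketched with an even simpler character-level Markov chain. The training list below is a tiny hypothetical sample, not Hon’s actual data.

```python
import random
from collections import defaultdict

# Toy character-level model: record, for each character, which characters
# follow it in real placenames, then random-walk through those transitions
# to "brainstorm" a new name. (Hon used a neural network; this cruder
# model illustrates the same pattern-learning idea.)

real_names = ["Masterton", "Ferryhill", "Bitteswell", "Westington",
              "Latford", "Holtenham", "Thornton", "Sketton"]

transitions = defaultdict(list)
for name in real_names:
    padded = "^" + name.lower() + "$"        # ^ marks the start, $ the end
    for a, b in zip(padded, padded[1:]):
        transitions[a].append(b)             # record observed successor

def generate(seed=None):
    random.seed(seed)
    out, ch = "", "^"
    while True:
        ch = random.choice(transitions[ch])
        if ch == "$" or len(out) > 15:       # stop at end marker or cap length
            break
        out += ch
    return out.capitalize()

print(generate())
```

The output is often plausible and occasionally absurd, which is exactly the charm of the list above: the model knows letter patterns, not geography.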


The AI Apocalypse: Warning No. 702

Monday, 17 July, 2017

Elon Musk has said it before and now he’s saying it again. We need to wise up to the dangers of artificial intelligence (AI). Speaking at the National Governors Association meeting in Rhode Island at the weekend, the CEO of Tesla and SpaceX said that AI will threaten all human jobs and could even start a war.

“It is the biggest risk that we face as a civilization,” he said.

Musk helped create OpenAI, a non-profit group dedicated to the safe development of the technology, and he’s now urging that a regulatory agency be formed to monitor AI developments and put regulations in place. Fans of AI say such concerns are hasty, given the technology’s evolving state.

Note: OpenAI and Google’s DeepMind released three papers last week — “Producing flexible behaviours in simulated environments” — highlighting an experimental machine learning system that used human teamwork to help an AI decide the best way to learn a new task. In one experiment, humans provided feedback to help a simulated robot learn how to do a backflip. The human input produced a successful backflip with under an hour of feedback, compared with the two hours of coding time an OpenAI researcher needed; the hand-coded version, by the way, produced an inferior backflip to the human-trained one.

Is this important? Yes, because evidence is emerging that an AI can do some tasks better with human instruction — from cleaning someone’s home to learning a patient’s unique care needs. OpenAI hopes that if we can “train” AI to work closely with humans, we’ll be able to moderate some of the potential downsides of the technology. Like replacing journalists or starting a war.
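The feedback loop in the backflip experiment can be caricatured in a few lines of Python. This is only a toy sketch: the real OpenAI/DeepMind work trains a neural reward model from human comparisons, while here the behaviour names, the scripted “human judge” and the 100-round loop are all hypothetical.

```python
import random

# Toy preference-feedback loop: a human repeatedly compares two attempts,
# and the system tallies up which behaviours the human prefers.

behaviours = ["tuck_jump", "half_flip", "full_backflip"]
scores = {b: 0.0 for b in behaviours}

def human_prefers(a, b):
    # Stand-in for a human judge: we pretend the human always prefers
    # the behaviour closer to a full backflip.
    ranking = {"tuck_jump": 0, "half_flip": 1, "full_backflip": 2}
    return a if ranking[a] >= ranking[b] else b

random.seed(0)
for _ in range(100):
    a, b = random.sample(behaviours, 2)      # show the human two attempts
    winner = human_prefers(a, b)
    scores[winner] += 1                      # feedback nudges the winner up

best = max(scores, key=scores.get)
print(best)
```

Even this crude tally converges on what the judge wants, which is the point: an hour of cheap comparisons can substitute for hours of hand-coding a reward.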


WTF?

Monday, 3 July, 2017

That’s the question Tim O’Reilly asks in his new book, which will be on shelves in October. WTF? What’s the Future and Why It’s Up to Us comes at a time when we’re being told that 47 percent of human tasks, including many white-collar jobs, could be eliminated by automation within the next 20 years. And we’ve all heard those stories about how self-driving cars and trucks will put millions of middle-class men out of work. Tim O’Reilly does not shy away from these scenarios, which have the potential to wreck societies and economies, but he’s positive about the future. Sure, we could let the machines put us out of work, but that will happen only through a failure of human imagination and a lack of will to make a better future. As a tech-optimist who breathes the enthusiasm of Silicon Valley, Tim O’Reilly believes that what’s impossible today will become possible with the help of the technology we now fear.

The tech thing we’re most afraid of has a name. AI.

AI, Tim O’Reilly claims, has the potential to turbocharge the productivity of all industries. Already it’s being used to analyze millions of radiology scans at a level of resolution and precision impossible for humans. It’s also helping doctors to keep up with the tsunami of medical research in ways that can’t be managed by human practitioners.

Consider another of our coming challenges: cybersecurity. The purpose of the DARPA Cyber Grand Challenge was to encourage the development of AI that can find and automatically patch software vulnerabilities faster than corporate IT teams can keep up with them. Given that an increasing number of cyberattacks are being automated, we will need machines to fight machines, and that’s where machine learning can protect us.

Tim O’Reilly is a hugely successful entrepreneur, but he’s also a Valley idealist, and he wants a future in which AI is not controlled by a few corporations but belongs to the commons of mankind. For this to happen, he says, we must embed ethics and security in the curriculum of every computer science course and every data training program. The rest of his ideas will be available in October when WTF? goes on sale.

WTF

Note: Just to show that life need not be taken too seriously, this site enables you to create your own O’Reilly book cover.


Unplugging the thinking toaster

Friday, 24 February, 2017

Given that robots and automation will likely lead to many people losing their jobs, how should we deal with this upheaval? For Microsoft co-founder Bill Gates, the answer is clear: tax the robots. In an interview with Quartz, Gates argues that taxing worker robots would offset job losses by funding training for work where humans are still needed, such as child and elderly care. Quote:

“And what the world wants is to take this opportunity to make all the goods and services we have today, and free up labor, let us do a better job of reaching out to the elderly, having smaller class sizes, helping kids with special needs. You know, all of those are things where human empathy and understanding are still very, very unique. And we still deal with an immense shortage of people to help out there.”

But, no taxation without representation, right? So, should the tax-paying robots have rights? What if progress in AI enables them to achieve consciousness in the future? When the machines are programmed to feel suffering and loss, will they be entitled to “humanoid rights” protection? Or should we prevent machines being programmed to suffer and therefore deny them rights, for the benefit of their human overlords? Here’s what Kurzgesagt, the Munich-based YouTube channel, says: