
Tag: AI

Kersti Kaljulaid and Sophia talk weaponized AI

Friday, 16 February, 2018 0 Comments

The organizers of this year’s Munich Security Conference decided they’d try something novel for the pre-event titled “The Force Awakens: Artificial Intelligence & Modern Conflict”, so they put Sophia centre stage and had her do the introductions. Hanson Robotics, Sophia’s creator, describe her as their “most advanced robot”, and for many in the audience last night this was their first opportunity to see a chatty bot in action.

The verdict? Unimpressive. The quality of Sophia’s audio output was sub-standard, but much worse was her language. The Munich Security Conference is an annual gathering of a global elite that’s comfortable with English as the lingua franca, but those in charge of Sophia’s speech rhythms ignored the fact that speed does not always equal progress. Her pace of delivery was far too fast for even most native speakers present. Earlier this week in the Financial Times, Michael Skapinker posited that “Europe speaks its own post-Brexit English”, claiming that this so-called “Eurish” is a mix of “romance and Germanic influences — and no tricky metaphors”. Sophia, clearly, does not read the FT, and neither do those in charge of her interaction with the real world. Skapinker’s “Eurish” is mostly imaginary, but chatbot programmers would do well to slow the pace of delivery, simplify the vocabulary and go easy on the metaphors.

That aside, the real star of the show was Kersti Kaljulaid, President of Estonia. Her English was perfectly attuned to the wavelength of the audience and her knowledge of both artificial intelligence and modern conflict was extraordinary. Then again, she would be familiar with both topics as Estonia is a leader in digital transformation and the 2007 Russian cyber-attack on Estonia was a sign of the dangerous new world we now share with the ruthless regimes in Moscow, Beijing and Teheran. Kersti Kaljulaid is on the front line and we are lucky that she understands the grave nature of the threats posed by AI in the hands of those who wish to destroy the civilization and the society she represents so eloquently and so knowledgeably.



Insh-AI: WOTF on Ash Wednesday

Wednesday, 14 February, 2018 0 Comments

Back in November last year, Wired ran an article titled Inside The First Church of Artificial Intelligence. The writer, Mark Harris, introduced readers to Anthony Levandowski, the “unlikely prophet” of a new religion of artificial intelligence called Way of the Future (WOTF). Levandowski’s church, we learn, will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.”

Last Sunday in the Sunday Times, Niall Ferguson, Senior Fellow at the Hoover Institution at Stanford University, asked “Shall we begin to worship the machines — to propitiate them with prayers, or even sacrifices?” And he provocatively proposed: “Perhaps we shall need to devise an AI equivalent of ‘Inshallah’ — Insh-AI, perhaps.” Ferguson’s syndicated column has the oddly banal title, The machines ate my homework, but it offers food for serious thought, especially today, Ash Wednesday. Ashes to ashes. AI, he says, is all about “getting computers to think like a species that had evolved brains much bigger than humans — in other words, not like humans at all.” One consequence of this might be to “return humanity to the old world of mystery and magic. As machine learning steadily replaces human judgment, we shall find ourselves as baffled by events as our pre-modern forefathers were.”

What will become of us then? Will we, in despair, in hope, follow WOTF? Ferguson quotes the German sociologist Max Weber who argued that modernity replaced mystery with rationalism and as a result people “said goodbye to magic and entered an ‘iron cage’ of rationality and bureaucracy.” If AI leads to a re-mystification of the world and a revival of magical thinking, Ferguson knows what he’s going to do: “I’m staying put in Weber’s iron cage,” he says.

But you can’t put ashes on an AI and not everyone aspires to being caged.


The Magi for the Epiphany

Saturday, 6 January, 2018 1 Comment

Something unexpected took place in Bethlehem and the otherworldly magi, who “appear and disappear in the blue depths of the sky”, are doing their best to comprehend the incomprehensible. It’s a long way from Bethlehem to Bloomsbury, but that was where William Butler Yeats was living in 1914 when he wrote The Magi. In a mere eight lines, he follows the journey of the three wise men with “ancient faces” that resemble “rain-beaten stones”, who are forever watching and waiting, “all their eyes still fixed, hoping to find once more” the thing that will satisfy their search for meaning.

Is Yeats saying that the world has yet to discover the meaning of Christ’s brief time on earth? Is it so that we cannot be fulfilled until “the uncontrollable mystery” is decrypted? Today, the quest for the secret of “the uncontrollable mystery” is increasingly fervent. Anthony Levandowski, for example, is the “Dean” of a brand new Silicon Valley religion called Way of the Future that worships artificial intelligence.

The Magi

Now as at all times I can see in the mind’s eye,
In their stiff, painted clothes, the pale unsatisfied ones
Appear and disappear in the blue depths of the sky
With all their ancient faces like rain-beaten stones,
And all their helms of silver hovering side by side,
And all their eyes still fixed, hoping to find once more,
Being by Calvary’s turbulence unsatisfied,
The uncontrollable mystery on the bestial floor.

William Butler Yeats

Yeats uses a series of “s”-sounding words — stones, stiff, still, silver, side by side, unsatisfied — to paint a picture of the mysterious Magi, who wear “stiff, painted clothes” and “helms of silver”. His use of alliteration and repetition underscores the character of the “unsatisfied ones”. On this Feast of the Epiphany, let us hope that they, and all of us, find some satisfaction this year.



AI is the ‘New Electricity’

Friday, 17 November, 2017 0 Comments

Well, so says Andrew Ng, co-founder of Coursera and an adjunct Stanford professor who founded the Google Brain Deep Learning Project. He was delivering the keynote speech at the AI Frontiers conference that was held last weekend in Santa Clara in Silicon Valley.

“About 100 years ago, electricity transformed every major industry. AI has advanced to the point where it has the power to transform every major sector in coming years,” Ng said.

Right now, the sectors that are getting the AI “electricity” and making it part of their core activities include tech and telecoms companies, automakers and financial institutions. These are digitally mature industries that focus on innovation over cost savings. The slowest adopters of the new “electricity” are in health care, travel, professional services, education and construction.


Enter the data labelling professional

Thursday, 16 November, 2017 0 Comments

You hear the words “artificial intelligence” and what do you think of? Dystopia vs. Utopia. Stephen Hawking warning us to leave Earth and Elon Musk sounding the alarm about a Third World War. On the other hand, we have Bill Gates saying there’s no need to panic about machine learning and Mark Zuckerberg urging us to cool the fear-mongering. AI, apprehension and confusion go hand in hand today. The fear of an unknown future is combined with a present dread that AI will take our jobs away, but every disruptive technology has seen the replacement of human workers. At the same time, we’ve always been ingenious enough to develop new jobs, and AI could be every bit as much a job generator as a job destroyer.

A recent report by Gartner predicts that while AI will eliminate 1.8 million jobs in the US, it will create 2.3 million jobs. The question is: Which kind of jobs will these be? Data scientists, with qualifications in mathematics and computer science, will be eagerly sought and highly paid, but what about the masses? Three words: Data Labelling Professional.

Imagine you want to get a machine to recognize expensive watches, and you have millions of images, some of which have expensive watches, some of which have cheap watches. You might need someone to train the machine to recognize images with expensive watches and ignore images without them. In other words, data labelling will be the curation of data, where people will take raw data, tidy it up and organize it for machines to process. In this way, data labelling could become an entry-level job or even a blue-collar job in the AI era. When data collection becomes pervasive in every industry, the market for data labelling professionals will boom. Take that, Stephen Hawking.
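To make that concrete, here is a toy sketch of the labelling step in Python. The filenames and labels are invented for illustration: a labeller attaches a verdict to each raw record, and the tidied result is what a machine later learns from.

```python
# Minimal sketch of a data-labelling task: humans attach labels to raw
# records so a machine can later learn from them. Filenames and labels
# are invented for illustration.

raw_images = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]

# A labeller reviews each image and records a verdict.
human_labels = {
    "img_001.jpg": "expensive_watch",
    "img_002.jpg": "cheap_watch",
    "img_003.jpg": "no_watch",
}

def build_training_set(images, labels):
    """Pair each raw image with its human-assigned label,
    dropping anything that was never labelled."""
    return [(img, labels[img]) for img in images if img in labels]

training_set = build_training_set(raw_images, human_labels)
print(training_set)
```

The curation work is the unglamorous middle step: the raw list in, the labelled pairs out, ready for a classifier.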



Singularity approaching

Saturday, 4 November, 2017 0 Comments

What will life be like when the predicted “singularity” arrives? Visual Suspect, a video production company based in Hong Kong, has come up with a depiction of what, for many, is a terrifying prospect. It’s terrifying because the “technological singularity” is the notion that artificial super-intelligence will trigger rampant technological growth, resulting in revolutionary changes to civilization.

The starting point for those wishing to learn about the singularity hypothesis remains The Singularity Is Near: When Humans Transcend Biology, written by the futurist Ray Kurzweil and published in 2005. Three years later, Kurzweil and Peter Diamandis founded Singularity University in California. It offers educational programs that focus on scientific progress and “exponential” technologies, especially AI.


Post from Mastersley Ferry-the Green

Friday, 21 July, 2017 0 Comments

Monday’s post here, The AI Apocalypse: Warning No. 702, was about artificial intelligence (AI) and Elon Musk’s alarming statement: “It is the biggest risk that we face as a civilization.” As we pointed out, fans of AI say such concerns are hasty.

Dan Hon is a fan of AI and he’s just trained a neural network to generate British placenames. How? Well, he gave his AI a list of real placenames and it then brainstormed new names based on the patterns it found in the training list. As Hon says, “the results were predictable.” Sample:

Mastersley Ferry-the Green
Borton Bittefell
Hisillise St Marsh
Westington-courding
Holtenham Stye’s Wood Icklets
West Waplest Latford
Fackle Village
Undwinton Woathiston
Thorton Stowin
Sketton Brittree
Ham’s Courd
Matton Oston
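Hon used a neural network, but the same trick can be sketched with something much simpler: a character-level Markov chain that learns which letter tends to follow each pair of letters in real placenames, then samples new ones letter by letter. The training names below are invented for illustration.

```python
import random

# Learn which character tends to follow each two-character context in a
# list of placenames, then sample new names from those statistics.
# "^" pads the start of a name; "$" marks its end.

def build_model(names, order=2):
    """Map each `order`-character context to the letters seen after it."""
    model = {}
    for name in names:
        padded = "^" * order + name.lower() + "$"
        for i in range(len(padded) - order):
            context = padded[i:i + order]
            model.setdefault(context, []).append(padded[i + order])
    return model

def generate(model, order=2, rng=random):
    """Sample one name, letter by letter, until the end marker."""
    context, out = "^" * order, []
    while True:
        nxt = rng.choice(model[context])
        if nxt == "$":
            return "".join(out).title()
        out.append(nxt)
        context = context[1:] + nxt

training = ["masterton", "ferryhill", "bitterne", "holtham", "latford"]
model = build_model(training)
print(generate(model))
```

A neural network picks up far subtler patterns than letter pairs, which is why Hon’s output has that eerie almost-English quality, but the principle is the same: absorb the patterns in the training list, then recombine them.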


The AI Apocalypse: Warning No. 702

Monday, 17 July, 2017 0 Comments

Elon Musk has said it before and now he’s saying it again. We need to wise up to the dangers of artificial intelligence (AI). Speaking at the National Governors Association meeting in Rhode Island at the weekend, the CEO of Tesla and SpaceX said that AI will threaten all human jobs and could even start a war.

“It is the biggest risk that we face as a civilization,” he said.

Musk helped create OpenAI, a non-profit group dedicated to the safe development of the technology, and he’s now urging that a regulatory agency be formed to monitor AI developments and then put regulations in place. Fans of AI say such concerns are hasty, given the technology’s evolving state.

Note: OpenAI and Google’s DeepMind released three papers last week — including “Producing flexible behaviours in simulated environments” — highlighting an experimental machine learning system that used human feedback to help an AI decide the best way to learn a new task. For one experiment, humans provided feedback to help a simulated robot learn how to do a backflip. The human input produced a successful backflip with under an hour of feedback, compared to the two hours of coding time an OpenAI researcher needed, which, by the way, produced an inferior backflip to the human-trained one.

Is this important? Yes, because evidence is emerging that an AI can do some tasks better with human instruction — from cleaning someone’s home to learning a patient’s unique care needs. OpenAI hopes that if we can “train” AI to work closely with humans, we’ll be able to moderate some of the potential downsides of the technology. Like replacing journalists or starting a war.
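The technique behind that backflip, learning a reward from pairwise human preferences, can be sketched in miniature. This is not OpenAI’s code: it’s a toy in which each behaviour is reduced to a single number and a simulated human always prefers the behaviour with the bigger number, and we fit a single weight so that preferred behaviours score higher.

```python
import math

# Toy preference learning: fit a reward weight w so that behaviours a
# human preferred get higher scores than the ones they rejected.

def preference_prob(w, a, b):
    """Bradley-Terry probability that behaviour a beats behaviour b."""
    return 1.0 / (1.0 + math.exp(-(w * a - w * b)))

def fit_reward(comparisons, lr=0.5, steps=200):
    """comparisons: list of (winner_feature, loser_feature) pairs."""
    w = 0.0
    for _ in range(steps):
        for a, b in comparisons:
            p = preference_prob(w, a, b)
            w += lr * (1.0 - p) * (a - b)  # gradient ascent on log-likelihood
    return w

# The simulated human always preferred the behaviour with more "rotation".
feedback = [(0.9, 0.2), (0.8, 0.1), (0.7, 0.3)]
w = fit_reward(feedback)
print(w)
```

After fitting, the learned reward reliably ranks the preferred behaviour above the rejected one, which is the whole point: the human never wrote down a reward function, they just kept answering “which of these two looks more like a backflip?”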


WTF?

Monday, 3 July, 2017 0 Comments

That’s the question Tim O’Reilly asks in his new book, which will be on shelves in October. WTF? What’s the Future and Why It’s Up to Us comes at a time when we’re being told that 47 percent of human tasks, including many white-collar jobs, could be eliminated by automation within the next 20 years. And we’ve all heard those stories about how self-driving cars and trucks will put millions of middle-class men out of work. Tim O’Reilly does not shy away from these scenarios, which have the potential to wreck societies and economies, but he’s positive about the future. Sure, we could let the machines put us out of work, but that will only happen because of a failure of human imagination and a lack of will to make a better future. As a tech-optimist who breathes the enthusiasm of Silicon Valley, Tim O’Reilly believes that what’s impossible today will become possible with the help of the technology we now fear.

The tech thing we’re most afraid of has a name. AI.

AI, Tim O’Reilly claims, has the potential to turbocharge the productivity of all industries. Already it’s being used to analyze millions of radiology scans at a level of resolution and precision impossible for humans. It’s also helping doctors to keep up with the tsunami of medical research in ways that can’t be managed by human practitioners.

Consider another of our coming challenges: cybersecurity. The purpose of the DARPA Cyber Grand Challenge was to encourage the development of AI to find and automatically patch software vulnerabilities that corporate IT teams cannot keep up with. Given that an increasing number of cyberattacks are being automated, we will need machines to fight machines, and that’s where machine learning can protect us.

Tim O’Reilly is a hugely successful entrepreneur, but he’s also a Valley idealist and he wants a future in which AI is not controlled by a few corporations but belongs to the Commons of mankind. For this to happen, he says we must embed ethics and security in the curriculum of every Computer Science course and every data training program. The rest of his ideas will be available in October when WTF? goes on sale.


Note: Just to show that life need not be taken too seriously, this site enables you to create your own O’Reilly book cover.


AI and terrific slang

Friday, 9 June, 2017 0 Comments

A lot of work on natural language processing involves designing algorithms that can understand meaning in written and spoken language and respond intelligently. Christopher Manning, Professor of Computer Science and Linguistics at Stanford University, relies on an offshoot of artificial intelligence known as “deep learning” to design algorithms that can teach themselves to understand meaning and adapt to new or evolving uses of language, such as slang. Andrew Myers spoke with Manning for Futurity about how AI (artificial intelligence) can teach itself slang. Snippet:

Can natural language processing adapt to new uses of language, to new words or to slang?

“That was one of the big limitations of early approaches. Words tend to pick up different usages and even meanings over time, often very remarkably. The word ‘terrific’ used to have a highly negative meaning — something that terrifies. Only recently has it become a positive term.

That’s one of the areas I’ve been involved in. Natural language processing, even if trained in the earlier meaning, can look at a word like ‘terrific’ and see the positive context. It picks up on those soft changes in meaning over time. It learns by examining language as it is used in the world.”
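Manning’s point can be illustrated with a toy sketch: score a word by the company it keeps, counting how often it appears alongside known-positive versus known-negative words. The corpus and seed lists below are invented for illustration; real systems learn these associations from billions of sentences rather than hand-picked lists.

```python
# Toy distributional sketch: infer a word's polarity from its context
# by counting co-occurrences with known-positive and known-negative words.

corpus = [
    "the show was terrific and wonderful",
    "a terrific performance great fun",
    "what a terrific and excellent idea",
]

positive_seeds = {"wonderful", "great", "excellent", "fun"}
negative_seeds = {"awful", "dreadful", "terrible"}

def polarity(word, sentences):
    """Score a word by its neighbours: +1 for each positive word and
    -1 for each negative word in sentences where it appears."""
    score = 0
    for sentence in sentences:
        tokens = sentence.split()
        if word in tokens:
            score += sum(t in positive_seeds for t in tokens)
            score -= sum(t in negative_seeds for t in tokens)
    return score

print(polarity("terrific", corpus))  # positive in this modern corpus
```

Fed a Victorian corpus instead, the same word would keep negative company and score below zero, which is exactly the soft drift in meaning Manning describes the system picking up.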


Unplugging the thinking toaster

Friday, 24 February, 2017 0 Comments

Given that robots and automation will likely lead to many people losing their jobs, how should we deal with this upheaval? For Microsoft co-founder Bill Gates, the answer is clear: tax the robots. In an interview with Quartz, Gates argues that taxing worker robots would offset job losses by funding training for work where humans are still needed, such as child and elderly care. Quote:

“And what the world wants is to take this opportunity to make all the goods and services we have today, and free up labor, let us do a better job of reaching out to the elderly, having smaller class sizes, helping kids with special needs. You know, all of those are things where human empathy and understanding are still very, very unique. And we still deal with an immense shortage of people to help out there.”

But, no taxation without representation, right? So, should the tax-paying robots have rights? What if progress in AI enables them to achieve consciousness in the future? When the machines are programmed to feel suffering and loss, will they be entitled to “humanoid rights” protection? Or should we prevent machines being programmed to suffer and therefore deny them rights, for the benefit of their human overlords? Here’s what Kurzgesagt, the Munich-based YouTube channel, says: