
Tag: AI

Post from Mastersley Ferry-the Green

Friday, 21 July, 2017 0 Comments

Monday’s post here, The AI Apocalypse: Warning No. 702, was about artificial intelligence (AI) and Elon Musk’s alarming statement: “It is the biggest risk that we face as a civilization.” As we pointed out, fans of AI say such concerns are hasty.

Dan Hon is a fan of AI and he’s just trained a neural network to generate British placenames. How? Well, he gave his AI a list of real placenames and it then brainstormed new names based on the patterns it found in the training list. As Hon says, “the results were predictable.” Sample:

Mastersley Ferry-the Green
Borton Bittefell
Hisillise St Marsh
Westington-courding
Holtenham Stye’s Wood Icklets
West Waplest Latford
Fackle Village
Undwinton Woathiston
Thorton Stowin
Sketton Brittree
Ham’s Courd
Matton Oston
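
For a feel of how a generator like Hon’s learns, here is a minimal sketch in the same spirit: it learns character patterns from a list of real placenames and then samples new ones. Hon used a neural network; this standard-library Markov chain is a much simpler stand-in, and the training list and settings below are invented for illustration.

```python
import random
from collections import defaultdict

ORDER = 3  # number of preceding characters the model conditions on

def train(names):
    """Record which character tends to follow each 3-character context."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * ORDER + name.lower() + "$"
        for i in range(len(padded) - ORDER):
            model[padded[i:i + ORDER]].append(padded[i + ORDER])
    return model

def generate(model, max_len=25):
    """Sample characters one at a time until the end-of-name marker appears."""
    context, out = "^" * ORDER, ""
    while len(out) < max_len:
        next_char = random.choice(model[context])
        if next_char == "$":
            break
        out += next_char
        context = context[1:] + next_char
    return out.title()

# A tiny illustrative training list; Hon trained on a full set of real placenames.
real_names = ["Appleton", "Barton", "Cheltenham", "Dunstable", "Farnborough",
              "Grantham", "Helston", "Ilkley", "Kettering", "Ludlow"]
model = train(real_names)
print([generate(model) for _ in range(5)])
```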


The AI Apocalypse: Warning No. 702

Monday, 17 July, 2017 0 Comments

Elon Musk has said it before and now he’s saying it again. We need to wise up to the dangers of artificial intelligence (AI). Speaking at the National Governors Association meeting in Rhode Island at the weekend, the CEO of Tesla and SpaceX said that AI will threaten all human jobs and could even start a war.

“It is the biggest risk that we face as a civilization,” he said.

Musk helped create OpenAI, a non-profit group dedicated to the safe development of the technology, and he’s now urging that a regulatory agency be formed to monitor AI developments and put regulations in place. Fans of AI say such concerns are hasty, given its evolving state.

Note: OpenAI and Google’s DeepMind released three papers last week — “Producing flexible behaviours in simulated environments” — highlighting an experimental machine-learning system that used human feedback to help an AI decide the best way to learn a new task. In one experiment, humans provided feedback to help a simulated robot learn how to do a backflip. Under an hour of human feedback yielded a successful backflip, compared with the two hours of coding time an OpenAI researcher needed, which, by the way, produced an inferior backflip to the human-trained one.
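
As a rough illustration of the mechanism, here is a toy sketch of fitting a reward model to pairwise human preferences, the idea behind the backflip experiment. The one-dimensional “behaviour”, the simulated human judge and the update rule are all invented for this sketch; the actual OpenAI/DeepMind work trains a neural-network reward model and a deep reinforcement learning agent.

```python
import random

TARGET = 0.7  # what the simulated human actually wants; the agent never sees this

def human_prefers(a, b):
    """Stand-in for a human judge comparing two behaviour 'clips'."""
    return a if abs(a - TARGET) < abs(b - TARGET) else b

# The reward model is a single parameter w: behaviours near w score highest.
w, lr = 0.0, 0.05
for _ in range(2000):
    a, b = random.random(), random.random()
    winner = human_prefers(a, b)
    loser = b if winner == a else a
    # Nudge the model only when it disagrees with the human's preference.
    if abs(winner - w) >= abs(loser - w):
        w += lr * (winner - w)

print(f"learned reward peaks near {w:.2f}; the hidden human target was {TARGET}")
```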

Is this important? Yes, because evidence is emerging that an AI can do some tasks better with human instruction — from cleaning someone’s home to learning a patient’s unique care needs. OpenAI hopes that if we can “train” AI to work closely with humans, we’ll be able to moderate some of the potential downsides of the technology. Like replacing journalists or starting a war.


WTF?

Monday, 3 July, 2017 0 Comments

That’s the question Tim O’Reilly asks in his new book, which will be on shelves in October. WTF? What’s the Future and Why It’s Up to Us comes at a time when we’re being told that 47 percent of human tasks, including many white-collar jobs, could be eliminated by automation within the next 20 years. And we’ve all heard those stories about how self-driving cars and trucks will put millions of middle-class men out of work. Tim O’Reilly does not shy away from these scenarios, which have the potential to wreck societies and economies, but he’s positive about the future. Sure, we could let the machines put us out of work, but that will only happen because of a failure of human imagination and a lack of will to make a better future. As a tech-optimist who breathes the enthusiasm of Silicon Valley, Tim O’Reilly believes that what’s impossible today will become possible with the help of the technology we now fear.

The tech thing we’re most afraid of has a name. AI.

AI, Tim O’Reilly claims, has the potential to turbocharge the productivity of all industries. Already it’s being used to analyze millions of radiology scans at a level of resolution and precision impossible for humans. It’s also helping doctors to keep up with the tsunami of medical research in ways that can’t be managed by human practitioners.

Consider another of our coming challenges: cybersecurity. The purpose of the DARPA Cyber Grand Challenge was to encourage the development of AI that can find and automatically patch software vulnerabilities faster than corporate IT teams can keep up with them. Given that an increasing number of cyberattacks are being automated, we will need machines to fight machines, and that’s where machine learning can protect us.
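
To make the machines-fighting-machines point a little more concrete, here is a toy sketch of the statistical anomaly detection that underlies many machine-learning defences: learn what normal traffic looks like, then flag outliers. The traffic figures and threshold are invented for illustration; the Cyber Grand Challenge systems themselves found and patched binary vulnerabilities, which goes far beyond anything shown here.

```python
import statistics

# Requests per minute observed during normal operation (illustrative data).
baseline = [112, 98, 105, 120, 101, 95, 110, 108, 99, 115]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(requests_per_minute, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations above normal."""
    return (requests_per_minute - mean) / stdev > threshold

for rate in [118, 240, 1500]:
    print(rate, "->", "ALERT" if is_anomalous(rate) else "ok")
```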

Tim O’Reilly is a hugely successful entrepreneur, but he’s also a Valley idealist and he wants a future in which AI is not controlled by a few corporations but belongs to the Commons of mankind. For this to happen, he says, we must embed ethics and security in the curriculum of every Computer Science course and every data training program. The rest of his ideas will be available in October when WTF? goes on sale.


Note: Just to show that life need not be taken too seriously, this site enables you to create your own O’Reilly book cover.


AI and terrific slang

Friday, 9 June, 2017 0 Comments

A lot of work on natural language processing involves designing algorithms that can understand meaning in written and spoken language and respond intelligently. Christopher Manning, Professor of Computer Science and Linguistics at Stanford University, relies on an offshoot of artificial intelligence known as “deep learning” to design algorithms that can teach themselves to understand meaning and adapt to new or evolving uses of language, such as slang. Andrew Myers spoke with Manning for Futurity about how AI (artificial intelligence) can teach itself slang. Snippet:

Can natural language processing adapt to new uses of language, to new words or to slang?

“That was one of the big limitations of early approaches. Words tend to pick up different usages and even meanings over time, often very remarkably. The word ‘terrific’ used to have a highly negative meaning — something that terrifies. Only recently has it become a positive term.

That’s one of the areas I’ve been involved in. Natural language processing, even if trained in the earlier meaning, can look at a word like ‘terrific’ and see the positive context. It picks up on those soft changes in meaning over time. It learns by examining language as it is used in the world.”
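
The mechanism Manning describes is distributional: a model infers a word’s current meaning from the company it keeps rather than from a fixed dictionary entry. Here is a deliberately tiny sketch of that idea; the corpus, seed lexicon and scoring rule are all invented for illustration and are nothing like Manning’s actual systems, which apply deep learning to large corpora.

```python
corpus = [
    "the show was terrific and everyone loved it",
    "what a terrific meal absolutely wonderful",
    "the service was awful and slow",
    "a dreadful film terrible acting throughout",
]

positive_seeds = {"loved", "wonderful", "great"}
negative_seeds = {"awful", "terrible", "slow"}

def sentiment_from_context(word):
    """Score a word by the sentiment seeds that appear in the same sentences."""
    pos = neg = 0
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            pos += sum(t in positive_seeds for t in tokens)
            neg += sum(t in negative_seeds for t in tokens)
    return "positive" if pos > neg else "negative" if neg > pos else "unknown"

print("terrific ->", sentiment_from_context("terrific"))  # positive
print("dreadful ->", sentiment_from_context("dreadful"))  # negative
```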


Unplugging the thinking toaster

Friday, 24 February, 2017 0 Comments

Given that robots and automation will likely lead to many people losing their jobs, how should we deal with this upheaval? For Microsoft co-founder Bill Gates, the answer is clear: tax the robots. In an interview with Quartz, Gates argues that taxing worker robots would offset job losses by funding training for work where humans are still needed, such as child and elderly care. Quote:

“And what the world wants is to take this opportunity to make all the goods and services we have today, and free up labor, let us do a better job of reaching out to the elderly, having smaller class sizes, helping kids with special needs. You know, all of those are things where human empathy and understanding are still very, very unique. And we still deal with an immense shortage of people to help out there.”

But, no taxation without representation, right? So, should the tax-paying robots have rights? What if progress in AI enables them to achieve consciousness in the future? When the machines are programmed to feel suffering and loss, will they be entitled to “humanoid rights” protection? Or should we prevent machines being programmed to suffer and therefore deny them rights, for the benefit of their human overlords? Here’s what Kurzgesagt, the Munich-based YouTube channel, says:


@HumanVsMachine

Monday, 13 February, 2017 0 Comments

Hardly a week goes by without some “expert” or other predicting that by, say, 2020, millions and millions of jobs will be lost in developed economies due to robotics, AI, cloud computing, 3D printing, machine learning and related technologies. Hardest hit will be people doing office and factory work, but other sectors, from trucking to healthcare, will be affected “going forward,” as lovers of business cliché love to say.

The Twitter feed @HumanVSMachine features images showing the increasing automation of work. The footage of people doing a job, set side by side with videos of robots doing the same thing, suggests a sombre future of post-human work.

Philippe Chabot from Montreal is the human behind @HumanVSMachine. He was a graphic artist in the video industry and he had plenty of work, once upon a time. But companies began outsourcing their artwork and Chabot found himself competing in a globalized market where rivals can create a logo for $5 and software automatically designs avatars. Today, Philippe Chabot works in a restaurant kitchen and he feeds @HumanVSMachine in his free time.

Note: This image of “Robot Baby Feeder; Robot, baby bottle, crib, toy” by Philipp Schmitt is included in the “Hello, Robot. Design between Human and Machine” exhibition at the Vitra Design Museum in Weil am Rhein in Germany.

Raising robot


Learning Machine Learning

Sunday, 5 February, 2017 0 Comments

True story: A player named Libratus sat down at a poker table in a high-stakes game of no-limit Texas Hold’em. The gruelling 20-day tournament ended a week ago in a dramatic victory for Libratus over four of the world’s top players. Libratus is no cigar-smoking dandy cowboy, however. It’s an artificial intelligence (AI).

Machines are getting smarter, and AI is entering society in all kinds of intriguing and disturbing ways. But who creates these machine-learning programs and who writes the algorithms that produce everything from stock market predictions to data journalism to poker-winning strategies? It’s time we found out and it’s time to learn how to do it ourselves. But how and where and when?

The ScienceAlert Academy is offering a 73.5-hour course titled “The Complete Machine Learning Bundle” for $39. This is the kind of immersion you’ll need to plan a career in the field or to take your hobby to the next level. The package contains 10 different courses, including “Hadoop & MapReduce for Big Data Problems” and “From 0 to 1: Learn Python Programming – Easy as Pie”.



Time well spent

Tuesday, 29 November, 2016 0 Comments

Award-winning storyteller, filmmaker, poet and media strategist Max Stossel is concerned about where the Attention Economy is headed. Exhausted by endless screen-time, he’s calling for a debate about “values”. Max asks: “What if news & media companies were creating content that enriched our lives, vs. catering to our most base instincts for clicks? What if social platforms were designed to help us create our ideal social lives, instead of to maximize ‘likes’? What if dating apps measured success in how well they helped us find what we’re looking for instead of in # of swipes?”

With AR, VR and AI about to radically change the ways in which we’re entertained, Max Stossel and friends have started Time Well Spent to “align technology with our humanity.” Max and his pals mean well, no doubt, but they’ve helped create the problem, and one more website, no matter how clever the title or how noble the goal, is just another website. Still, that dancing panda is so cute, isn’t it?


Talkin’ Industry 4.0

Saturday, 5 November, 2016 0 Comments

Today, at the 29th IATEFL BESIG Annual Conference in Munich, I’ll be talking about the language of the Fourth Industrial Revolution and its seven key components: Industry 4.0, IoT, Big Data, cloud computing, robotics, AI and cybersecurity.

As with the three preceding Industrial Revolutions, which were powered, respectively, by steam, electricity and transistors, the cyber-physical systems now driving this fourth upheaval will transform manufacturing and replace William Blake’s vision of dark Satanic sweatshops with that of a better, cleaner, cleverer place — the smart factory.

“And did the Countenance Divine,
Shine forth upon our clouded hills?
And was Jerusalem builded here,
Among these dark Satanic Mills?”

Jerusalem by William Blake (1757 — 1827)


#FutureofAI

Monday, 17 October, 2016 0 Comments

On Thursday, President Barack Obama will host the Frontiers Conference in Pittsburgh to imagine the USA and the world in 50 years and beyond. Artificial Intelligence (AI) will play a growing role in this world and the White House has released a pre-conference report on considerations for AI called “Preparing for the Future of Artificial Intelligence” (PDF 1.1MB). The report looks at the state of AI, its existing and potential uses — data science, machine learning, automation, robotics — and the questions that it raises for society and policy. Snippet:

“Fairness, Safety, and Governance: As AI technologies gain broader deployment, technical experts and policy analysts have raised concerns about unintended consequences. The use of AI to make consequential decisions about people, often replacing decisions made by human actors and institutions, leads to concerns about how to ensure justice, fairness, and accountability—the same concerns voiced previously in the ‘Big Data’ context. The use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment.”


HI+AI

Sunday, 16 October, 2016 0 Comments

“An axe or a hammer is a passive extension of a hand, but a drone forms a distributed intelligence along with its operator, and is closer to a dog or horse than a device.” So says Bryan Johnson, founder and CEO of Kernel, which aims to develop biomedically engineered devices linked to our central nervous system to restore and enhance human cognitive, motor and sensory abilities. In a word: neuroprosthetics.

“The combination of human and artificial intelligence will define humanity’s future,” declares Johnson in an article for TechCrunch that examines the interplay of artificial intelligence (AI) and human intelligence (HI). He argues that humanity has arrived at the border of intelligence enhancement, “which could be the most consequential technological development of our time, and in history.” Once we head into this new country, the result could be people who need never worry about forgetfulness again, or suffer the degradations of ALS, Parkinson’s, Alzheimer’s and Huntington’s diseases. Johnson is very much on the side of the Valley evangelists, but he feels obliged to add what has become the mandatory cautionary note:

“It is certainly true that with every new technology we create, new risks emerge that need thoughtful consideration and wise action. Medical advances that saved lives also made germ warfare possible; chemical engineering led to fertilizers and increased food production but also to chemical warfare. Nuclear fission created a new source of energy but also led to nuclear bombs.”

Despite mankind’s inherent wickedness, Bryan Johnson does not fear the future and warns against using “a fear-based narrative” as the main structure for discussing HI+AI. This would limit the imagination and curiosity that are at the core of being human.

“The basis of optimism is sheer terror,” said Oscar Wilde, who was born on this day in 1854 at 21 Westland Row in Dublin.