
Tag: MIT

The decline and fall of grammar and style at Inc.

Tuesday, 24 April, 2018 0 Comments

“One of the most common conversations among business travelers have among each other is to discuss how to pack optimally for your next trip. As someone with more than two million miles of experience under by belt, I have developed several tricks and hacks to pack light…”

Wut? You might be inclined to think such a rubbish sentence was created by some badly programmed AI, but it wasn’t. In fact, it’s the opening of an article published by Inc. that’s so riddled with grammatical and stylistic errors that it’s comically unreadable. “How Many Pairs of Underwear Should You Pack On Your Business Trip? 2 Million Miler Packing Secrets” is the title of this gem and it was “written” by one Jim Schleckser, who styles himself “CEO, Inc. CEO Project”.

Inc.

History: Inc. was founded in Boston in 1979 by Bernie Goldhirsh, an MIT-trained engineer who had worked at Polaroid before founding Sail magazine, which he sold for $10 million. He used the profits to launch Inc.

In 2000, the terminally ill Goldhirsh sold Inc. to German publisher Gruner + Jahr for a reported $200 million. It was the peak of the dot-com mania, after all. In 2005, after sobering up, Gruner + Jahr offloaded Inc. for $35 million to Joe Mansueto, CEO of Morningstar. Now, apparently, Joe the billionaire cannot afford to employ copy editors.


Whither work?

Thursday, 17 November, 2016 0 Comments

“It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” — Erik Brynjolfsson

Who he? The Director of the MIT Initiative on the Digital Economy and co-author of the best-selling The Second Machine Age, that’s who. Brynjolfsson maintains that in the race against the machine, some are likely to win while many are likely to lose. It’s a view that’s gaining traction as pessimism about the role of technology in a globalized economy increases, but Stephen DeWitt is more optimistic.

DeWitt has held senior positions at HP, Cisco and Symantec, but instead of retiring, he became CEO of Work Market, a rapidly growing platform that’s reformulating the worker-employer equation. Backed by New York VC Fred Wilson, Work Market helps connect workers with companies that need to get stuff done.

The concept isn’t new. The “gig economy” of Uber and TaskRabbit is familiar to many, but DeWitt believes that this “on demand economy” will eventually encompass all kinds of work. Millions of people are stuck in jobs that are unnecessary and inefficient, he argues, and points out that by 2030 there will be 3.2 billion skilled workers on earth, all connected to the internet. Will a company filled with full-time workers be the ideal model then? Or might the model be an agile core of managers assigning work to a network of workers competing for projects based on their skills, reputations and ability to deliver results? That could spell the end of unnecessary and inefficient jobs. Or it might lead to a dystopia. We are approaching the crossroads and we’ll have to turn left or right.

“If you want something new, you have to stop doing something old.” — Peter Drucker

The gig economy


Will we create a new class of robot slaves?

Tuesday, 28 June, 2016 0 Comments

That’s the question posed by Joi Ito, the Japanese entrepreneur, venture capitalist, academic and Director of the MIT Media Lab. Ito is concerned that Artificial Intelligence (AI) and other technologies might create a “productivity abundance” that would end the financial need to work. On the face of it, this should not be a cause of great concern, given that many people hate their jobs. But there’s more to work than labour, Ito argues. It confers social status and gives people a sense of purpose. The solution? Disassociate the notion of work from productivity. The role model? Periclean Athens, which Ito terms “a moral society where people didn’t need to work to be engaged and productive.” In a post titled The Future of Work in the Age of Artificial Intelligence, Ito asks:

“Could we imagine a new age where our self-esteem and shared societal value is not associated with financial success or work as we know it?… A good first step would be to begin work on our culture alongside our advances in technology and financial innovations so that the future looks more like Periclean Athens than a world of disengaged kids with nothing to do. If it was the moral values and virtues that allowed Periclean Athens to function, how might we develop them in time for a world without work as we currently know it?”

To his credit, Ito appends this note to his suggestion: “There were many slaves in Periclean Athens. For the future machine age, will we need to be concerned about the rights of machines? Will we be creating a new class of robot slaves?”

We looked at that very issue in our Monday post here: When will the e-people be allowed to vote?


Drones for Good: Loon Copter wins $1 million prize

Sunday, 7 February, 2016 0 Comments

The winner of the $1 million prize at the Drones for Good event in Dubai this weekend was the Loon Copter, a prototype drone that can fly, float and swim underwater. Equipped with a “buoyancy chamber” that fills with water, the drone can sink beneath the surface, tilt 90 degrees and use its four rotors to swim around. This piece of ingenuity is the product of the Embedded System Research Lab at Oakland University in Rochester, Michigan. Its potential uses include searching for sunken objects, environmental monitoring and underwater structure inspection.

The Robotics Award for Good went to SuitX, an exoskeleton system designed to improve the physiological gait development of children. It’s a product of the Robotics and Human Engineering Laboratory at the University of California. “SuitX is just one of the companies hoping to boost interest in exoskeleton research,” writes Signe Brewster in MIT Technology Review. “Competing suits like the ReWalk, which costs $70,000 and weighs about 50 pounds, are striving to reduce costs while improving functionality. If exoskeleton makers can drive suit costs down to a few thousand dollars, they could start competing with motorized wheelchairs.”

The winners of the UAE national competition were the BuilDrone team, who designed a drone that can detect and repair leaks in pipelines, and students from Ajman University, who developed a smart guidance system for the blind that uses vibration signals to help them avoid obstacles.

Yes, we need to keep a close watch on those nerds, but drones, robots and AI can be, and are, a force for good.


Minsky and Mozart

Wednesday, 27 January, 2016 1 Comment

In a blog post titled Farewell, Marvin Minsky (1927 – 2016), Stephen Wolfram, Founder & CEO of Wolfram Research, pays tribute to the American pioneer of artificial intelligence and co-founder of the AI Lab at MIT, who died on Sunday. Snippet:

“Marvin immediately launched into talking about how programming languages are the only ones that people are expected to learn to write before they can read. He said he’d been trying to convince Seymour Papert that the best way to teach programming was to start by showing people good code. He gave the example of teaching music by giving people Eine kleine Nachtmusik, and asking them to transpose it to a different rhythm and see what bugs occur. (Marvin was a long-time enthusiast of classical music.)”

RIP, Marvin Minsky, genius and trailblazer of advances in mathematics, computational linguistics, optics and robotics. Apropos Minsky’s love of classical music: Eine kleine Nachtmusik (Serenade No. 13 for strings in G major) is, as the world knows, a famous chamber composition by Wolfgang Amadeus Mozart, and today happens to be his birthday. Happy 260th, dear Mozart!


It’s different this time

Tuesday, 3 February, 2015 0 Comments

Since the Industrial Revolution, there’s been an almost insatiable demand for labour, despite the relentless advance of technology. So why should it be any different this time? Surely, the cloud will create millions of jobs and the app industry will generate global employment? Well, yes, maybe. But let’s consider this: It took the United States some 200 years to change from an agricultural economy, where 90 percent of the people worked on farms, to the current situation, where the number is nearer two percent. The robotics/AI revolution is happening faster than its industrial and digital predecessors — and it will present an even bigger challenge.

Technologies such as the self-driving car will be dramatically disruptive, but over a much shorter time-frame. There are millions of truck drivers working today. What will happen if self-driving vehicles put them out of a job in a matter of years? Algorithms are getting better at translating and writing — jobs that once required humans. So what will we do for work? That is the question being posed by the MIT academics Erik Brynjolfsson and Andrew McAfee, who say that we’re entering a “Second Machine Age,” where the increasing rate of change driven by information technologies could leave swathes of medium- and low-skilled workers in the slow lane. On the upside, the human ability to innovate offers grounds for hope. They say.


How do people get new ideas?

Friday, 24 October, 2014 0 Comments

“Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, ‘How stupid of me not to have thought of this.’ But why didn’t he think of it?” That was the question posed in 1959 by Isaac Asimov in an essay he wrote for an MIT spinoff, Allied Research Associates in Boston. Arthur Obermayer, a friend of the author, found the piece “while cleaning out some old files” and immediately recognized its relevance for the contemporary debate about creativity. It was published earlier this week in the MIT Technology Review. Snippets:

“Consequently, the person who is most likely to get new ideas is a person of good background in the field of interest and one who is unconventional in his habits. (To be a crackpot is not, however, enough in itself.)”

“My feeling is that as far as creativity is concerned, isolation is required… The presence of others can only inhibit this process, since creation is embarrassing.”

“The world in general disapproves of creativity, and to be creative in public is particularly bad. Even to speculate in public is rather worrisome.”

Asimov did concede that group thinking by ‘creatives’ might be worthwhile now and then, as “a meeting of such people may be desirable for reasons other than the act of creation itself.” He argued, however, that “a meeting in someone’s home or over a dinner table at some restaurant is perhaps more useful than one in a conference room.” And a few drinks might be in order, too, because “there should be a feeling of informality. Joviality, the use of first names, joking, relaxed kidding are, I think, of the essence — not in themselves, but because they encourage a willingness to be involved in the folly of creativeness.”


Martin Jacques: Our nominee for the Shaw-Duranty-Thurow Prize

Monday, 2 April, 2012

The Irish-born playwright George Bernard Shaw became an apologist for totalitarianism after being invited to visit the Soviet Union in 1931. In his intellectual conceit, the author of such works as Major Barbara and Pygmalion turned a blind eye to the murder of millions in the name of Communism, and he expended a lot of […]


The 8 Rules of Tweeting

Monday, 6 February, 2012

Last year, Paul André of Carnegie Mellon, Michael Bernstein of MIT and Kurt Luther of Georgia Tech created Who Gives a Tweet? to analyze those 140-character messages. The 43,000 responses they collected led to “Who Gives A Tweet? Evaluating Microblog Content Value” (PDF).

Academic prose is not everyone’s thing, so Megan Garber of The Atlantic has filtered the research findings into “Be Better at Twitter: The Definitive, Data-Driven Guide”. The result distills down to what we might call “Garber’s Eight Rules for Tweeting”: Old news is no news, Contribute to the story, Keep it short, Limit Twitter-specific syntax, Keep it to yourself, Provide context, Don’t whine and Be a tease.

Will these stand the test of time as well as George Orwell’s Six Rules of Writing have done? The author of Nineteen Eighty-Four was ahead of the game back in 1946 when he advised “Never use a long word where a short one will do” and “If it is possible to cut a word out, always cut it out”. Orwell rules.