
From Artificial Intelligence to Machine Learning

Posted on 18 August 2017

AI and machine learning are set to have a huge impact on the finance industry. Below, Dr. Paul Bilokon talks through the history of AI; he will also be discussing how big financial data and machine learning can be handled efficiently for high-frequency trading.

Ever since the dawn of humanity inventors have been dreaming of creating sentient and intelligent beings. Hesiod (c. 700 BC) describes how the smithing god Hephaestus, by command of Zeus, moulded earth into the first woman, Pandora. The story wasn’t a particularly happy one: Pandora opened a jar – Pandora’s box – releasing all the evils of humanity, leaving only Hope inside when she closed the jar again. Ovid (43 BC – 18 AD) tells us a happier legend, that of a Cypriot sculptor, Pygmalion, who carved a woman out of ivory and fell in love with her. Miraculously, the statue came to life. Pygmalion married Galatea (that was the statue’s name) and it is rumoured that the town Paphos is named after the couple’s daughter.

Stories of golems, anthropomorphic beings created out of mud, earth, or clay, originate in the Talmud and appear in the Jewish tradition throughout the Middle Ages and beyond. One such story is connected to Rabbi Judah Loew ben Bezalel (1512 – 1609), the Maharal of Prague. Rabbi Loew’s golem protected the Jewish community and helped with manual labour. Unfortunately, the golem ran amok and had to be neutralised by removing the Divine Name from his forehead.

Mary Shelley’s Gothic novel Frankenstein; or, The Modern Prometheus (first published in London in 1818) tells us about another early experiment in artificial life. That one also went wrong: the Monster, created by the scientist Victor Frankenstein, turned on his creator.

From Fantasy to Reality

More than a century had passed since Frankenstein’s fictional experiment when, in summer of 1956, a group of researchers gathered at a workshop organised by John McCarthy, then a young Assistant Professor of Mathematics, at Dartmouth College in Hanover, New Hampshire. Marvin Minsky, Trenchard More, Oliver Selfridge, Claude Shannon, Herbert Simon, Ray Solomonoff, and Nathaniel Rochester were among the attendees. The stated goal was ambitious: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Thus the field of artificial intelligence, or AI, was born.

The participants were optimistic about the success of this endeavour. In 1965, Simon proclaimed that “machines will be capable, within twenty years, of doing any work a man can do”. Around the same time, Minsky estimated that within a single generation “the problem of creating ‘artificial intelligence’ will substantially be solved”.

Early Success

Indeed, there had been some early successes. In 1960, Donald Michie built the Machine Educable Noughts And Crosses Engine (MENACE) out of matchboxes. The machine learned to play Noughts and Crosses (Tic-Tac-Toe) through trial and error: moves that led to wins were reinforced by adding beads to the corresponding matchboxes, while moves that led to losses were penalised by removing them. In his 1964 PhD thesis, Daniel G. Bobrow produced the Lisp program STUDENT that could solve high school algebra word problems. Around the same time, Joseph Weizenbaum's program ELIZA (named after Eliza Doolittle) used pattern matching to conduct a "conversation" with a human – this was one of the first chatbots.
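MENACE's matchbox scheme is an early example of what we would now call reinforcement learning, and it is simple enough to sketch in a few dozen lines. The following is a minimal illustration, not a reconstruction of Michie's actual machine: the bead counts, rewards, and opponent (a random player) are all assumptions chosen for brevity.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

class Menace:
    """One 'matchbox' per board position; beads weight the legal moves."""
    def __init__(self):
        self.boxes = {}  # board string -> {move index: bead count}

    def choose(self, board):
        key = "".join(board)
        # A new matchbox starts with 2 beads per legal move (an assumption).
        box = self.boxes.setdefault(
            key, {i: 2 for i, c in enumerate(board) if c == " "})
        moves, weights = zip(*box.items())
        return random.choices(moves, weights)[0]

    def reinforce(self, history, reward):
        # Add beads after a win, remove one after a loss (never below 1).
        for key, move in history:
            box = self.boxes[key]
            box[move] = max(1, box[move] + reward)

def play(menace):
    """MENACE plays 'X' against a random 'O' player; returns the result."""
    board, history, player = [" "] * 9, [], "X"
    while True:
        if player == "X":
            move = menace.choose(board)
            history.append(("".join(board), move))
        else:
            move = random.choice([i for i, c in enumerate(board) if c == " "])
        board[move] = player
        w = winner(board)
        if w or " " not in board:
            result = w or "draw"
            if result == "X":
                menace.reinforce(history, +3)
            elif result == "O":
                menace.reinforce(history, -1)
            return result
        player = "O" if player == "X" else "X"

menace = Menace()
results = [play(menace) for _ in range(2000)]
late_wins = results[-500:].count("X") / 500
print(f"win rate over last 500 games: {late_wins:.2f}")
```

Note that, just as the article says, nothing in the learner encodes strategy: the matchboxes simply accumulate beads on moves that happened to precede wins, and over many games the weighted draw steers play toward those moves.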

One successful idea pioneered by Minsky and Seymour Papert at MIT was the blocks world, an idealised physical setting consisting of wooden blocks of various shapes and colours placed on a flat surface. The researchers built a system (1967-1973) consisting of a robotic arm and a video camera that could manipulate these blocks. Minsky reasoned that the real world consisted of relatively simple interacting agents – much like the program operating the robotic arm: “Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies – in certain very special ways – this leads to true intelligence.” Such multiagent systems are actively studied by computer scientists to this day.

The blocks world also aided some early advances in natural language understanding. Terry Winograd's SHRDLU (1968 – 1970), written in Micro Planner and Lisp, could converse with a human and discuss the objects and their properties within a blocks world.

AI Winter

Not everyone was as optimistic about AI as Minsky and Simon. In 1966 the US National Research Council's Automatic Language Processing Advisory Committee (ALPAC) produced a damning report on machine translation. The report concluded that machine translation was inaccurate, slow, and expensive. Across the Atlantic Ocean, Sir James Lighthill submitted Artificial Intelligence: A General Survey (1973) to the British Science Research Council. The survey stated that "in no part of the field have discoveries made so far produced the major impact that was then promised". Funding for AI research began to dry up – an AI winter had set in. By the late 1980s, the Lisp machine market had collapsed. By the early 1990s, the use of expert systems, usually built in Lisp, had declined.

The claims made by the first generation of AI researchers had backfired. Moreover, AI became a moving target: once computers learned to do something that only humans were capable of up until then, it was no longer regarded as AI.

Modern Triumphs

However, more successes followed. In 1976, a computer was used to prove the long-standing "four-colour theorem": Kenneth Appel and Wolfgang Haken resorted to an exhaustive computer analysis of many particular cases. In 1996, William McCune developed an automated reasoning system that proved Herbert Robbins' conjecture that all Robbins algebras are Boolean algebras. McCune's program used a method that human mathematicians deemed genuinely creative. A year later, IBM's supercomputer Deep Blue defeated Garry Kasparov, then the reigning world chess champion. In 2014, in a University of Reading Turing test competition organised at the Royal Society by Huma Shah and Kevin Warwick, a Russian chatbot, Eugene Goostman, convinced 33% of the judges that it was human – thus passing the Turing test.

The foundations of self-driving cars were laid down during the No Hands Across America Navlab 5 USA tour in 1995, when two researchers from Carnegie Mellon University's Robotics Institute "drove" from Pittsburgh, PA to San Diego, CA using the Rapidly Adapting Lateral Position Handler (RALPH) to steer while the scientists handled the throttle and brake. The system drove the van all but 52 of the 2,849 miles, by day and by night. Today Oxford has its own RobotCar – it is being developed by a team led by Will Maddern.

Since, to quote the Economist (7 June 2007), the term artificial intelligence is "associated with systems that have all too often failed to live up to their promises", scientists prefer to talk about machine learning (ML). Intel's Nidhi Chappell explains the difference: "AI is basically the intelligence – how we make machines intelligent, while machine learning is the implementation of the computer methods that support it. The way I think of it is: AI is the science and machine learning the algorithms that make the machines smarter. The enabler for AI is machine learning."

The field of machine learning is thriving today, largely fuelled by advances in computational capabilities and deep neural networks. This, however, merits a separate article. We shall conclude this one with a quote from Michael Beeson’s The Mechanization of Mathematics (2004): “In 1956, Herb Simon, one of the ‘fathers of artificial intelligence’, predicted that within ten years computers would beat the world chess champion, compose ‘aesthetically satisfying’ original music, and prove new mathematical theorems. It took forty years, not ten, but all these goals were achieved – and within a few years of each other!”


QuantMinds International 2019, Paul Bilokon, Thalesians

Dr. Paul A. Bilokon, an alumnus of Christ Church, Oxford, and Imperial College, is a quant trader, scientist, and entrepreneur. He is the Founder and CEO of Thalesians Ltd, a consultancy and think tank focussing on new scientific and philosophical thinking in finance. He will be discussing the importance of neocybernetics at this year's QuantMinds International.

