Friday, 21 August 2020

10 Things To Know About John Krafcik

John Krafcik CEO, Waymo

John Krafcik

CEO, Waymo; source wikipedia

John Krafcik (born September 18, 1961) is the CEO of Waymo. Krafcik was previously president of TrueCar, Inc. and president and CEO of Hyundai Motor America. He was named CEO of Google's self-driving car project in September 2015. Krafcik remained CEO after Google spun the project out into a new company called Waymo, housed under Google's parent company Alphabet Inc.

Krafcik earned a degree in mechanical engineering at Stanford and a master's degree in management from the Massachusetts Institute of Technology, where he studied under James P. Womack. (Womack first became widely known as an author in 1990 with the publication of The Machine That Changed the World, which made the term "lean production" known worldwide.)

 With “incredible serendipity,” he worked for a year at a Xerox dot-matrix plant in Fremont, Calif. Then GM and Toyota announced they were opening the New United Motor Manufacturing plant in the same town, and Krafcik was hired as the plant’s first engineer.

Krafcik worked in traditional automotive manufacturing for several decades before moving to Google's self-driving car project in 2015. 

Quality & Manufacturing Engineering


 – 2 years
Fremont, California
His first job was at New United Motor Manufacturing, Inc., a joint venture between General Motors and Toyota, where he worked as a quality and manufacturing engineer from 1984 to 1986.

He wrote two "lean production" manifestos: "Triumph of the Lean Production System" (Sloan Management Review, 1988) and the "Running the Factory" chapter of The Machine That Changed the World (Macmillan, 1990).

Lean Production Research and Consulting

International Motor Vehicle Program, MIT

 – 4 years
Cambridge, Massachusetts
He worked in the International Motor Vehicle Program at MIT as a lean production researcher and consultant from 1986 to 1990. During this time, Krafcik traveled to and studied 90 manufacturing plants in 20 countries, comparing their productivity and quality.
His studies formed the data behind Womack's book, The Machine That Changed the World. The book was a study of "lean production", a term Krafcik coined.

Product Development

Ford Motor Company

 – 14 years
Dearborn, Michigan
1990-1994: Supervisor, Alpha Advanced Team, Truck Strategy & Planning, Truck Chassis Engineering.
1994-1997: Manager, European Transit Chassis Engineering
1998-2004: Chief Engineer, Ford Expedition/Lincoln Navigator

In 1990, Krafcik moved to Ford Motor Company, where he held several positions, including chief engineer for the Ford Expedition and Lincoln Navigator in the late 1990s and early 2000s, and chief engineer for truck chassis engineering.

Hyundai Motor America

9 years 10 months
  • VP, Product Development & Strategic Planning

     – 4 years 9 months
    Fountain Valley, California

    President and CEO

     – 5 years 2 months
    Fountain Valley, California

Krafcik started at Hyundai Motor America as vice president for product development and strategic planning in 2004. Within a few years, he was promoted to president and CEO of Hyundai Motor America, a post he held until the end of 2013. During Krafcik's tenure, Hyundai reported record sales and increased U.S. market share.

Following the financial crisis of 2007–2008, Krafcik oversaw a group at Hyundai that created an "Assurance Program". The program allowed Americans to return their new cars if they lost their jobs within a year.


TrueCar, Inc.

 – 1 year 6 months
Santa Monica, California
Krafcik became president of TrueCar, Inc. in 2014 and served as a director on the company's board.



Waymo

 – Present, 5 years
Mountain View, California
Waymo is an Alphabet technology company building the world's most experienced driver, making it safe & easy for people & things to move around the world.

Google hired Krafcik to head its self-driving car unit in September 2015, as the company struggled to build relationships in Detroit.

In 2018, Krafcik was awarded Smithsonian Magazine's American Ingenuity Award for Technology alongside Dmitri Dolgov.


Sunday, 9 August 2020

History of AI (artificial intelligence) Brief explanation in chronological order

History of AI (artificial intelligence) Brief explanation in chronological order


AI (artificial intelligence), which everyone has heard of by now, has gone through a long history, changing form many times before reaching its current recognition. Since artificial intelligence first appeared in the 1950s, its history of more than 60 years is too deep to express in a word or two, and it continues to have a great influence on our lives. In this article, we will explain the history of AI from its birth to the present, and its future.

  • What is AI (Artificial Intelligence)?
  • 1950-1960: The emergence of AI
  • 1960-1974: First AI boom
  • 1974-1980: First winter era
  • 1980-1987: Second AI boom
  • 1987-1993: Second winter era
  • 1993-2020: Third AI boom
    • Innovation 1: Practical application of machine learning
    • Innovation 2: The advent of deep learning
  • New AI era

What is AI (artificial intelligence)?

Artificial intelligence, also known as AI (Artificial Intelligence), is defined in the dictionary as "a computer system that has the functions of human intelligence such as learning, reasoning, and judgment."

This is Shakey, the first robot with the ability to move about autonomously, successfully developed at the Stanford Research Institute's AI lab in the late 1960s and early '70s. Shakey, which could avoid obstacles and move of its own accord, is said to have spurred the AI technology development of the time.

  • 1950-1960: The emergence of AI

The origin of the concept of artificial intelligence can be found in the 1950 paper "Computing Machinery and Intelligence" by the English mathematician Alan Turing. In it, Turing examined the common usage of the words "machine" and "thinking" and posed the question "Can a machine think?" Turing argued that whether a machine thinks depends on whether it can hold up a conversation with a person. This became known as the "Turing test".

In addition to this, Alan Turing is also famous as a leading figure who first theorized the concept of the computer and contributed to the Allied victory in World War II by helping decipher the German cipher machine Enigma. If you're interested, why not watch the movie "The Imitation Game", which depicts part of his life.
The term "artificial intelligence" dates back to the Dartmouth Conference held by scientists in 1956. There, John McCarthy, a professor of mathematics at Dartmouth College in New Hampshire, coined the name "artificial intelligence" for machines that think like humans.

With Turing establishing the concept of machine intelligence, and McCarthy defining the term "artificial intelligence" at the Dartmouth Conference, AI was suddenly recognized by scientists around the world, and research on AI became active.

1960-1974: First AI boom

The first boom (the first AI boom) broke out in the 1960s. In this era, technologies called "reasoning" and "searching" performed well on problems with clear rules, such as puzzles and simple games, and great expectations were placed on artificial intelligence.
Among the many studies conducted during the first AI boom, ELIZA, the first natural language processing program, developed by Joseph Weizenbaum of the Massachusetts Institute of Technology in 1966, is particularly famous. ELIZA is said to be an ancestor of the AI assistant Siri.

1974-1980: First winter era

However, artificial intelligence (AI) at the time could handle simple, well-defined problems such as solving mazes and proving theorems, but it became clear that it could not solve real-world problems in which many factors are intertwined. Questions such as "Is AI really as intelligent as humans?" spread among scientists.

Once these performance limits became apparent, the boom subsided, research support dried up, and AI development slowed. This was the winter era that lasted from 1974 to the early 1980s.

The simple, impractical problems that the artificial intelligence of this period could solve came to be called "toy problems".

1980-1987: Second AI boom

The next boom (the second AI boom) occurred in the 1980s. The trigger for this boom was the practical realization of numerous expert systems.

Expert systems had been under development since the 1960s; in 1972, during the first AI boom, an expert system for diagnosing bacterial infections (MYCIN) was developed, though it was never put into practical use. By the time of the second AI boom, many large companies had introduced expert systems into their work, and expert systems became widely used commercially as practical tools.
Expert-system mechanisms are still in use among companies today. The recommendation systems of e-commerce sites such as Amazon and Rakuten, which suggest similar products based on items a person has viewed, or display a list of related news articles a person might want to read next, can be seen as expert systems that estimate and present information.
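To make the idea concrete, here is a minimal sketch of the rule-based mechanism behind an expert system. The rules and facts below are made-up examples for illustration, not the rules of any historical system such as MYCIN:

```python
# A toy forward-chaining expert system: if-then rules applied to known facts.
# Each rule maps a set of required facts to a new conclusion.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"fever", "rash"}, "possible viral infection"),
    ({"possible respiratory infection", "chest pain"}, "recommend chest X-ray"),
]

def infer(facts):
    """Keep firing rules until no new conclusions can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "chest pain"}))
# Conclusions chain: the first rule's output triggers the third rule.
```

This also hints at the drawbacks discussed below: every rule must be written by hand, and inconsistent or exceptional cases are hard to express.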

1987-1993: Second winter era

However, it gradually became clear to researchers that expert systems had limitations. There were two major drawbacks.

Disadvantage 1
Computers at the time could not collect and accumulate the necessary information by themselves, so humans had to manually write vast amounts of "general common sense" knowledge into the computer.

Disadvantage 2
Since computers at the time could not handle exceptions or inconsistent rules, the amount of knowledge that could actually be used had to be limited to information in specific domains.

For these reasons, the second AI boom came to an end, and from 1987 to 1993 AI research entered a winter era once again.

1993-2020: Third AI boom


From 1993, when the winter era ended, to 2006, when deep learning was introduced, the foundation for the third AI boom was steadily being laid. On the eve of the boom, in 1997, the famous chess computer Deep Blue defeated the world chess champion. This moment, when AI first beat a human champion, is still vividly remembered.

And now we are in the midst of the third AI boom. Two technological innovations are driving the third AI boom.

Innovation 1: Practical application of machine learning

"Machine learning," in which artificial intelligence (AI) acquires knowledge from large amounts of data known as "big data," was put to practical use.

Big data is a huge collection of data that is difficult to record, store, and analyze with conventional database management systems.

Machine learning is a technology in which a computer learns from large amounts of data and automatically builds algorithms and models that perform tasks such as classification and prediction.
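To make "learning from data" concrete, here is a minimal sketch of a nearest-neighbor classifier: instead of hand-written rules, the prediction comes directly from labeled examples. The data points and labels are invented for illustration:

```python
import math

# Hypothetical labeled training data: (feature vector, label).
train = [
    ((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
    ((5.0, 5.2), "large"), ((4.8, 5.1), "large"),
]

def predict(x):
    """Classify x with the label of its nearest training example."""
    nearest = min(train, key=lambda item: math.dist(item[0], x))
    return nearest[1]

print(predict((1.1, 0.9)))  # falls near the "small" cluster
print(predict((5.1, 5.0)))  # falls near the "large" cluster
```

The contrast with the expert-system era is that nothing here encodes what "small" or "large" means; the decision boundary comes entirely from the data.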

Innovation 2: The advent of deep learning

In conventional machine learning, humans defined the features used for prediction and inference and thereby improved accuracy. With "deep learning," it became possible to extract features from the training data automatically and improve accuracy further.

Deep learning is one implementation method of machine learning. It is a technique in which a computer learns tasks that humans perform and solves complicated problems.

What is a feature?
A feature in machine learning is a measurable characteristic used as input for learning. For example, when distinguishing red apples from green apples, "color" is a feature. Humans unconsciously use appropriate features when identifying objects, but in conventional machine learning (other than deep learning), humans had to specify the features to be used for identification. It had long been difficult to teach artificial intelligence appropriate features for complex problems such as identifying human faces.
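The apple example can be sketched as hand-crafted feature engineering: a human decides that "redness" (here, the red channel of a hypothetical RGB pixel, with an arbitrary threshold) is the feature, and the classifier sees only that number:

```python
# Hand-crafted feature: a human chose redness as the distinguishing signal.
def redness(pixel):
    """Extract the feature: the red channel of an (R, G, B) pixel."""
    r, g, b = pixel
    return r

def classify_apple(pixel, threshold=128):
    """Classify using only the human-chosen feature."""
    return "red apple" if redness(pixel) > threshold else "green apple"

print(classify_apple((220, 40, 30)))   # a strongly red pixel
print(classify_apple((90, 200, 60)))   # a strongly green pixel
```

Deep learning's contribution was removing this manual step: for problems like face identification, the network learns which features matter instead of relying on a human to choose them.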

"AI gains knowledge by itself" and "AI acquires features by itself" were breakthroughs in the field of artificial intelligence research, and became catalysts for the current boom.
Major events in the third AI boom
2006: A practical method for deep learning appeared
2011: IBM Watson beat human champions on a quiz show
2012: Improved image recognition made it possible to identify a "cat" from image data
2015: Elon Musk and others donated more than 100 billion yen to found OpenAI
2016: AlphaGo (a computer Go program) won its first victory over a professional Go player

New AI era

What kind of technological development will AI achieve in the future? With the technological innovations of the third AI boom, the word "singularity" has been attracting attention in recent years.

The singularity (technological singularity) refers to the point at which technology such as AI can produce intelligence smarter than humans. The concept was first introduced by the American mathematician Vernor Vinge and later advocated by Ray Kurzweil, an authority on artificial intelligence research.

The singularity is expected to make a huge difference to employment. If AI gains more intelligence than humans, it cannot be ruled out that AI will take over work that humans are doing now. Our way of working may change significantly in the next 10 or 20 years.

Over the 60 years since the birth of artificial intelligence to the present day, our lives have changed along with artificial intelligence. And the singularity that will come in the future has the potential to change the way we live today. In this article, I would like to deepen my understanding of artificial intelligence by looking back on the history of artificial intelligence and examining its future.