Machine Learning (ML)

Your constantly updated definition of Machine Learning (ML) and collection of videos and articles

What is Machine Learning (ML)?

Machine learning is a branch of artificial intelligence that uses data and algorithms to teach machines how to learn from experience and perform tasks that humans can do, such as recognizing images, analyzing data, or predicting outcomes. Machine learning can be divided into different types, such as supervised learning, unsupervised learning, and reinforcement learning, depending on how the machine learns from the data. Machine learning has many applications in various fields, such as e-commerce, finance, healthcare, education, and entertainment. Examples you may encounter in your daily life include personalized social media feeds, photo apps that recognize faces and objects, and streaming services that recommend shows and movies.

Object recognition is an important application of machine learning.

Machine learning works by using algorithms that learn from data and make predictions or classifications based on the patterns they find. Machine learning algorithms can be trained on labeled data (supervised learning), unlabeled data (unsupervised learning), or through trial and error (reinforcement learning). Machine learning algorithms can also be designed to learn from multiple sources of data (multi-task learning), learn from their own mistakes (self-learning), or learn from other algorithms (meta-learning). Machine learning algorithms are typically created using frameworks that accelerate solution development, such as TensorFlow and PyTorch.
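
To make this concrete, here is a minimal sketch of supervised learning using PyTorch, one of the frameworks mentioned above. The data, model size, and training settings are illustrative assumptions only: a tiny model is fitted to labeled examples by repeatedly comparing its predictions with the correct answers and adjusting its parameters to reduce the error.

    import torch
    import torch.nn as nn

    # Labeled training data: each input x is paired with a correct answer y (here y = 2x + 1).
    x = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
    y = torch.tensor([[3.0], [5.0], [7.0], [9.0]])

    model = nn.Linear(1, 1)                                   # a tiny model whose parameters are learned
    loss_fn = nn.MSELoss()                                    # measures how wrong the predictions are
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

    for _ in range(1000):                                     # "experience": repeated passes over the data
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)                           # compare predictions with the labels
        loss.backward()                                       # work out how each parameter should change
        optimizer.step()                                      # adjust the parameters to reduce the error

    print(model(torch.tensor([[5.0]])).item())                # prediction for an unseen input; roughly 11

The same loop structure underlies far larger models; only the data, the network architecture, and the optimizer settings change.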

What is the History of Machine Learning?

Experimental machine able to recognize the letters of the alphabet. Announced by the US Navy in 1960.

Here are some of the major milestones and events in the history of machine learning:

  • In 1623, Wilhelm Schickard invented the first mechanical calculator, which could perform arithmetic operations automatically.

  • In 1837, Charles Babbage designed the first programmable computer, the Analytical Engine, which could run different algorithms and store data.

  • In 1843, Ada Lovelace, a mathematician often regarded as the first programmer, published the first algorithm for the Analytical Engine and envisioned that it could process symbols, words, and music, not just numbers.

  • In 1913, Andrey Markov introduced the concept of Markov chains, a mathematical model of stochastic processes that can capture the probability of future events based on past observations.

  • In 1936, Alan Turing proposed the idea of a universal machine, a theoretical device that could simulate any algorithm given a suitable program and data.

  • In 1943, Warren McCulloch and Walter Pitts developed the first artificial neural network, a simplified model of biological neurons that could perform logical operations.

  • In 1950, Alan Turing devised the Turing test, a criterion for judging whether a machine can exhibit human-like intelligence.

  • In 1952, Arthur Samuel created the first self-learning program, a checkers game that could improve its performance by learning from its own moves.

  • In 1956, John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester organized the Dartmouth Summer Research Project on Artificial Intelligence, the first conference on AI and machine learning, where the term "artificial intelligence" was coined.

  • In 1957, Frank Rosenblatt invented the perceptron, a type of neural network that could learn to classify patterns using a simple learning rule.

  • In 1959, Noam Chomsky introduced the concept of generative grammar, a formal system for describing the syntax and semantics of natural languages.

  • In 1963, Edward Feigenbaum and Julian Feldman edited Computers and Thought, the first anthology of artificial intelligence research, which collected papers from various researchers in the field.

  • In 1965, Alexey Ivakhnenko and Valentin Lapa published the first general method of deep learning, the Group Method of Data Handling, which used polynomial neural networks to model complex systems.

  • In 1967, Fredkin, Landauer, and Toffoli proposed the concept of reversible computing, a model of computation that can conserve energy and avoid information loss.

  • In 1969, Marvin Minsky and Seymour Papert published the book Perceptrons, which analyzed the limitations of single-layer neural networks and caused a decline in neural network research.

  • In 1972, Richard Sutton and Andrew Barto began their work on reinforcement learning, a type of machine learning that learns from its own actions and rewards.

  • In 1974, John Holland introduced the concept of genetic algorithms, a type of evolutionary computation that mimics natural selection to optimize solutions.

  • In 1979, Tom Mitchell formulated the definition of machine learning as "a computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E".

  • In 1982, John Hopfield proposed the Hopfield network, a type of recurrent neural network that can store and retrieve patterns as attractors of its dynamics.

  • In 1985, Judea Pearl introduced Bayesian networks, a graphical model for representing and reasoning with probabilistic knowledge.

  • In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published the paper Learning representations by back-propagating errors, which popularized the backpropagation algorithm for training multi-layer neural networks.

  • In 1989, Yann LeCun and his colleagues developed LeNet, one of the first convolutional neural networks, which could recognize handwritten digits.

  • In 1992, Ross Quinlan developed C4.5, an improved version of his earlier ID3 algorithm for generating decision trees from data.

  • In 1995, Vladimir Vapnik and Corinna Cortes introduced the support vector machine, a powerful classifier that uses the kernel trick to map data into high-dimensional spaces.

  • In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov, demonstrating the power of machine learning and search algorithms.

  • In 1998, Sergey Brin and Larry Page founded Google, a search engine that used the PageRank algorithm to rank web pages based on their links.

Google's PageRank algorithm, named after Larry Page, is considered to be a form of machine learning.

Public Domain

  • In 2006, Doug Cutting and Mike Cafarella created Hadoop, an open-source framework for distributed storage and processing of large-scale data.

  • In 2006, Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh published the paper A fast learning algorithm for deep belief nets, which revived the interest in deep learning by showing how to pre-train neural networks layer by layer using restricted Boltzmann machines.

  • In 2009, Fei-Fei Li, Jia Deng, and Kai Li created ImageNet, a large-scale database of annotated images that became a benchmark for image recognition and computer vision.

  • In 2010, Dan Jurafsky and James Martin published the second edition of the book Speech and Language Processing, which covered the state-of-the-art methods and applications of natural language processing.

  • In 2011, IBM's Watson defeated human champions at Jeopardy!, a quiz show that requires natural language understanding and knowledge retrieval.

  • In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) using a deep convolutional neural network called AlexNet, which achieved a significant improvement over previous methods.

  • In 2013, Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean developed word2vec, a neural network model that learns distributed representations of words from large corpora.

  • In 2015, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio introduced the concept of attention, a mechanism that allows neural networks to focus on relevant parts of the input or output.

  • In 2016, Ian Goodfellow, Yoshua Bengio, and Aaron Courville published the book Deep Learning, which provided a comprehensive overview of the theory and practice of deep learning.

  • In 2016, Google DeepMind's AlphaGo defeated Lee Sedol, a professional Go player, using a combination of deep neural networks and reinforcement learning.

  • In 2017, Google DeepMind's AlphaGo Zero achieved superhuman performance at Go by learning from scratch, without any human data or guidance.

  • In 2018, Google's BERT achieved state-of-the-art results on several natural language processing tasks, using a bidirectional transformer-based language model that can leverage both left and right context.

  • In 2019, OpenAI's GPT-2 generated coherent and fluent text from a given prompt, using a large-scale transformer-based language model.

  • In 2020, OpenAI's GPT-3 generated impressive text across various domains and genres, using a massive transformer-based language model with 175 billion parameters.

  • In 2021, Google's AlphaFold 2 predicted the 3D structure of proteins from their amino acid sequences, using a deep learning system that incorporates attention, graph neural networks, and multiple sequence alignment.

A sample image from the ImageNet object recognition challenge, which started in 2010. Competitors' systems are often trained and tested on hundreds of thousands of images.

ImageNet, Public Domain

What are the Applications of Machine Learning?

 Machine learning has many applications in various domains, such as:

  • Social media personalization: Machine learning algorithms can analyze user behavior and preferences to tailor the content and ads that they see on social media platforms. For example, Facebook uses machine learning to rank posts, stories, and videos on its News Feed.

  • Image recognition: Machine learning algorithms can process and classify images based on their features and labels. For example, Google Photos uses machine learning to organize and search photos by faces, places, and things. (A brief code sketch of this idea follows this list.)

  • Business intelligence optimization: Machine learning algorithms can help businesses analyze large amounts of data and generate insights and predictions that can improve decision making and performance. For example, Airbnb uses machine learning to optimize its pricing, ranking, and search algorithms.

  • TV, movie and video recommendations: Machine learning algorithms can learn from user ratings, reviews, and viewing history to recommend relevant and personalized content. For example, Netflix uses machine learning to suggest shows and movies that users might like.

  • Healthcare personalization: Machine learning algorithms can help diagnose diseases, recommend treatments, and monitor patient health. For example, IBM Watson uses machine learning to analyze medical records, images, and literature to provide evidence-based recommendations for cancer care.
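
To give a concrete flavor of the image-recognition application above, here is a minimal sketch assuming Python with torchvision (0.13 or later) installed; the file name "photo.jpg" is a placeholder for any local image. A network pretrained on the ImageNet dataset assigns the picture one of 1,000 everyday labels.

    import torch
    from PIL import Image
    from torchvision import models

    # A convolutional network pretrained on ImageNet, with its matching preprocessing recipe.
    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    image = Image.open("photo.jpg").convert("RGB")  # placeholder path for any local photo
    batch = preprocess(image).unsqueeze(0)          # resize, normalize, and add a batch dimension

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]      # one probability per ImageNet label

    top = int(probs.argmax())
    print(weights.meta["categories"][top], float(probs[top]))

Production systems such as photo search add much more (face grouping, location data, on-device optimization), but the core step of classifying an image with a trained network looks like this.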

What are the Future Prospects for Machine Learning?

The future prospects for machine learning are bright and promising, as this technology is constantly evolving and improving. Based on current trends and research, here are some possible scenarios for the future of machine learning:

  • Machine learning will be an integral part of all AI systems, large or small, enabling them to adapt and improve over time based on new data and feedback.

  • Machine learning will be offered as a cloud-based service, known as Machine Learning-as-a-Service (MLaaS), allowing users to access various tools and platforms for data analysis, model building, training, testing, and deployment.

  • Machine learning will enable connected AI systems to “continuously learn” from the internet and other sources of information, enhancing their knowledge and capabilities.

  • Machine learning will empower human creativity and innovation, as it will assist in generating novel and original content, such as art, music, literature, and code.

  • Machine learning will improve the quality of life and well-being of people, as it will help in diagnosing diseases, recommending treatments, monitoring health, providing education, and more.

Questions about Machine Learning

How does machine learning work?

Machine learning uses data and algorithms to create models that perform tasks without explicit instructions.

What can machine learning do?

Machine learning can do many things that require human intelligence or expertise, such as recognizing faces, translating languages, diagnosing diseases, etc.

Where is machine learning used?

Machine learning is used in almost every industry and domain, such as healthcare, education, finance, retail, manufacturing, entertainment, transportation, etc.

Why is machine learning important?

Machine learning is important because it can help solve complex problems, improve efficiency and productivity, enhance customer experience, and create new opportunities and innovations.

Which machine learning model to use?

The choice of machine learning model depends on the type and amount of data available, the goal and complexity of the task, the performance and accuracy required, and the resources and constraints involved. Some common types of machine learning models are linear regression, logistic regression, decision tree, k-means clustering, neural network, support vector machine, etc.
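
In practice, a common way to choose among candidate models is to cross-validate each one on the same data and compare the results. The sketch below illustrates that workflow under assumed conditions, using scikit-learn and its built-in iris dataset.

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)                 # a small, labeled toy dataset

    # Fit two common model types and report 5-fold cross-validated accuracy for each.
    candidates = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(random_state=0)]
    for model in candidates:
        scores = cross_val_score(model, X, y, cv=5)
        print(type(model).__name__, round(scores.mean(), 3))

Whichever candidate scores best under the constraints that matter (accuracy, speed, interpretability) is usually the one to take forward.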

Are neural networks used in machine learning?

Neural networks are often used in deep learning, which is a branch of machine learning that builds complex models from large data sets.

Learn more about Machine Learning (ML)

Take a deep dive into Machine Learning (ML) with our course AI for Designers.

In an era where technology is rapidly reshaping the way we interact with the world, understanding the intricacies of AI is not just a skill, but a necessity for designers. The AI for Designers course delves into the heart of this game-changing field, empowering you to navigate the complexities of designing in the age of AI. Why is this knowledge vital? AI is not just a tool; it's a paradigm shift, revolutionizing the design landscape. As a designer, make sure that you not only keep pace with the ever-evolving tech landscape but also lead the way in creating user experiences that are intuitive, intelligent, and ethical.

AI for Designers is taught by Ioana Teleanu, a seasoned AI Product Designer and Design Educator who has established a community of over 250,000 UX enthusiasts through her social channel UX Goodies. She imparts her extensive expertise to this course from her experience at renowned companies like UiPath and ING Bank, and now works on pioneering AI projects at Miro.

In this course, you’ll explore how to work with AI in harmony and incorporate it into your design process to elevate your career to new heights. Welcome to a course that doesn’t just teach design; it shapes the future of design innovation.

In lesson 1, you’ll explore AI's significance, understand key terms like Machine Learning, Deep Learning, and Generative AI, discover AI's impact on design, and master the art of creating effective text prompts for design.

In lesson 2, you’ll learn how to enhance your design workflow using AI tools for UX research, including market analysis, persona interviews, and data processing. You’ll dive into problem-solving with AI, mastering problem definition and production ideation.

In lesson 3, you’ll discover how to incorporate AI tools for prototyping, wireframing, visual design, and UX writing into your design process. You’ll learn how AI can assist to evaluate your designs and automate tasks, and ensure your product is launch-ready.

In lesson 4, you’ll explore the designer's role in AI-driven solutions, how to address challenges, analyze concerns, and deliver ethical solutions for real-world design applications.

Throughout the course, you'll receive practical tips for real-life projects. In the Build Your Portfolio exercises, you’ll practise how to integrate AI tools into your workflow and design for AI products, enabling you to create a compelling portfolio case study to attract potential employers or collaborators.

Open Access—Link to us!

We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.

If you want this to change, link to us or join us to help us democratize design knowledge!