Machine Learning Concepts

Machine learning is one of the biggest buzzwords of the decade, so understanding exactly what it is, knowing its limitations, and grasping the basics of how it works can give you a real head start when launching your tech career.

Machine learning is quite commonly used synonymously with artificial intelligence (AI), but the two are not identical. Machine learning is in fact a type of AI: it is the most commonly used and researched method of achieving narrow artificial intelligence (‘narrow’ AI, as opposed to ‘general’ AI, means we are not talking about the kind of AI that wants to take over the world from humans!). The simplest definition of machine learning is a computer learning without being explicitly programmed. Once it can do this, it can be used to complete tasks such as predicting trends or classifying objects.

This isn’t a machine learning course and so we won’t go into a lot of detail here. If you’re looking for a great course on machine learning, check out our info about courses here.

What is Machine Learning?

Traditional computer algorithms are hardcoded so that every eventuality is covered. For example, a complex weather prediction algorithm would have to consider every possible combination of factors such as temperature, humidity, and wind speed. The more factors there are, the more eventualities the algorithm has to cover, and so the harder the code becomes to write. In an age where data is abundant, programmers and data scientists find that there are more factors to consider than ever. Not only does this make it very difficult to write accurate algorithms, it also makes it difficult to know which algorithms are best.
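
To make that concrete, here is a toy sketch of the hardcoded approach for a rain predictor. The factors and thresholds are entirely invented for illustration:

```python
# The traditional approach: every rule is written by hand. Even with just
# three invented factors, the branching grows quickly as factors are added.
def will_rain(temperature_c, humidity_pct, wind_speed_kmh):
    if humidity_pct > 85 and temperature_c < 20:
        return True
    if humidity_pct > 70 and wind_speed_kmh > 25:
        return True
    # ...and so on, one branch for every eventuality the programmer
    # can anticipate in advance.
    return False

print(will_rain(18.0, 90.0, 10.0))  # True
```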

Machine learning takes datasets and “learns” what the algorithm should be from that data. Instead of providing the exact logic for the weather prediction, the data scientist can provide all the weather data for the last 10 years and the machine will review this data and design its own algorithm. So, instead of writing code to predict the weather, the data scientist writes code that learns how to predict the weather. One result of this is that quite often data scientists themselves don’t understand exactly how the algorithms the computer has designed actually make their decisions.
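
Here is the same idea in the machine-learning style, as a minimal sketch using scikit-learn. The feature names and the tiny synthetic dataset are invented purely for illustration; a real project would train on years of recorded observations:

```python
# Instead of hand-coding rules, we fit a model to past data and let it
# work out the relationship between the factors and the outcome.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [temperature_c, humidity_pct, wind_speed_kmh] for one day.
X = np.array([
    [21.0, 60.0, 10.0],
    [18.5, 75.0, 20.0],
    [25.0, 40.0,  5.0],
    [15.0, 90.0, 30.0],
])
# Rainfall (mm) recorded the following day -- the outcome to predict.
y = np.array([0.0, 4.2, 0.0, 11.5])

model = LinearRegression()
model.fit(X, y)  # the "learning" step

# Predict tomorrow's rainfall for a new set of conditions.
print(model.predict([[19.0, 80.0, 15.0]]))
```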

There are two main reasons why machine learning has only recently come to the forefront of technology, even though the mathematical methods it is built on have been well known for decades. Firstly, the learning process can be very computationally expensive to run (i.e. it requires either a very powerful computer or a very long time), and it is only because of recent advances in hardware and software that this has become feasible.

Secondly, the output of a machine learning model will only ever be as good as the data you put in. Even if you feed the world’s best machine learning algorithm terrible data, or not enough data, you will still get terrible results back. It’s only in the last decade or so that the world has been collecting enough high-quality data to make this kind of learning possible.

How Does Machine Learning Work?

There are two main problems machine learning can solve: prediction and classification. Prediction algorithms use data to predict future or unknown outcomes. This is similar to the weather scenario we have been using, but prediction can also be applied to house prices, coronavirus spread, or electricity grid demand. Classification algorithms, on the other hand, are designed to label unknown data: separating spam emails from non-spam emails, distinguishing cancerous cells from healthy cells, or recognising handwritten characters.
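
As a quick sketch of the classification side, here is a toy spam filter in scikit-learn. The two features (number of links and proportion of capital letters) and the dataset are made up for illustration:

```python
# Supervised classification on invented email features.
from sklearn.linear_model import LogisticRegression

# Each row: [number_of_links, proportion_of_capital_letters].
X = [
    [12, 0.45],  # many links, shouty text
    [10, 0.60],
    [1,  0.05],  # few links, normal text
    [0,  0.02],
]
y = ["spam", "spam", "not spam", "not spam"]

clf = LogisticRegression()
clf.fit(X, y)

# Classify a previously unseen email.
print(clf.predict([[8, 0.30]]))  # most likely "spam"
```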

There are several ways to have a machine learn for itself, but the main two are supervised learning and unsupervised learning. In supervised learning, algorithms are trained using labelled outcomes. For example, we would label all our training emails as either “spam” or “not spam” (exactly as in the sketch above); it is then up to the machine learning algorithm to identify all future emails as “spam” or “not spam”. Supervised learning works great when the outcomes are known, but sometimes they are not. For example, when Netflix tries to recommend a new movie to watch or Spotify suggests a song to listen to, there are no labels to match against. Instead, the machine learning algorithm has to figure out the patterns in the data on its own. This is unsupervised learning. By doing this, movies can be grouped by similar attributes, and recommendations can be generated from these new groups.
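
The spam sketch above is supervised learning: every training email carried a label. Here is a minimal sketch of the unsupervised case, grouping movies with scikit-learn’s KMeans. The features (runtime and a made-up “action score”) are invented for illustration:

```python
# Unsupervised clustering: no labels, the algorithm finds the groups itself.
from sklearn.cluster import KMeans

# Each row: [runtime_minutes, action_score_0_to_1].
movies = [
    [95,  0.90],  # short, action-heavy
    [100, 0.85],
    [140, 0.10],  # long, dialogue-driven
    [150, 0.15],
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(movies)
print(labels)  # e.g. [0 0 1 1] -- similar movies end up in the same group
```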

Why is Machine Learning Useful?

The most obvious benefit of machine learning is that it allows ‘smarter’ algorithms to be developed than would previously have been possible. Banks can detect fraud more accurately, doctors can diagnose cancer earlier, and energy companies can plan demand and supply. All great ways of improving lives or saving money. This all works because machine learning algorithms can identify patterns that humans would otherwise not notice. Not only this, but the algorithms can weigh many factors at once, whereas humans would struggle to design algorithms in more than a few dimensions.

Secondly, making data-driven decisions can be vital to a business’s survival. By ‘data-driven decisions’, we mean decisions based on numbers and measurable metrics rather than a hunch or what has always been done in the business. The volumes of data available today are increasing rapidly, and analysing this data manually is almost impossible. Machine learning enables more automation of data analysis, making it possible to base decisions on more data. Even so, there remain massive amounts of data in companies which have not yet been exploited for all the valuable insight they hold!

What Are Some of the Problems with Machine Learning?

As mentioned before, since the computer designs the algorithm itself, the reasoning behind its outputs can often seem nonsensical to humans. This lack of understanding can cause problems with accountability in high-risk fields, such as financial trading, since it is not possible to have full faith in the outcomes of machine learning algorithms.

Although machine learning is a fantastic leap forward in algorithm design, it does have its limitations. These are often not understood at higher business levels, which can lead to many companies attempting to overuse machine learning. Since machine learning is based entirely on data, it is impossible to predict situations for which we have no data. For example, an algorithm trying to predict the stock market wouldn’t be able to take into account unexpected events such as international tensions, pandemics or political changes it had never seen before. A common misconception is that machine learning is “deterministic”; in reality it is very much “stochastic” (i.e. susceptible to randomness).

Finally, there are ethical concerns around machine learning and artificial intelligence as a whole. A common thought experiment is a self-driving car that must decide between endangering an older person or a younger person due to a situation on the road. Part of an organisation’s decision to use a machine learning algorithm ought to be the ethics of using it. This is why companies are beginning to hire machine learning ethicists at such a high rate.

What Tools Can I Try?

There are a gazillion machine learning tools and companies out there, some significantly better than others. If you’re just getting started, though, there are some extremely common ones you should be at least a little familiar with.

First up are the Python libraries. The two big ones are PyTorch and Scikit-learn, both widely used and very powerful. Scikit-learn is a little more beginner-friendly, so if you are unsure, jump in with that one first by trying one of the many online walk-throughs or tutorials (or the sketch below). There are many other Python machine learning libraries, often designed for a specific type of task, such as NLTK (the Natural Language Toolkit) and Theano.
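
If you want a feel for what a first Scikit-learn walkthrough looks like, here is a minimal sketch using the bundled iris dataset (so nothing needs downloading); the model choice and split are arbitrary illustrative defaults:

```python
# A typical beginner workflow: load data, split it, train, then score.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```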

Secondly, there are other technologies such as TensorFlow, Keras, and Spark, which go beyond the core Python data-science stack but integrate easily with it. With these, you’re likely to spend more time learning how to work the software than actually creating artificial intelligence. However, they are all popular in industry, so if you are pursuing a data science career, spend some time getting to know them. Again, the internet is rife with suitable tutorials and support.
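
As a small taste of Keras (via TensorFlow), here is the same iris problem expressed as a tiny neural network. The layer sizes and training settings are arbitrary choices for illustration, not tuned values:

```python
# A minimal Keras network; requires the tensorflow package.
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                      # four iris measurements
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # three species
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, verbose=0)

loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print(f"Test accuracy: {accuracy:.2f}")
```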