The progression of machine learning

The evolution of machine learning – TechCrunch

Major tech companies have actively reoriented themselves around AI and machine learning: Google is now “AI-first,” Uber has ML running through its veins and internal AI research labs keep popping up.

They’re pouring resources and attention into convincing the world that the machine intelligence revolution is arriving now. They tout deep learning, in particular, as the breakthrough driving this transformation and powering new self-driving cars, virtual assistants and more.

Despite this hype around the state of the art, the state of the practice is less futuristic.

Software engineers and data scientists working with machine learning still use many of the same algorithms and engineering tools they did years ago.

That is, traditional machine learning models — not deep neural networks — are powering most AI applications.

Engineers still use traditional software engineering tools for machine learning engineering, and they don’t work: The pipelines that take data to model to result end up built from scattered, incompatible pieces.

There is change coming, as big tech companies smooth out this process by building new machine learning-specific platforms with end-to-end functionality.

Large tech companies have recently started to use their own centralized platforms for machine learning engineering, which more cleanly tie together the previously scattered workflows of data scientists and engineers.

What goes into a machine learning sandwich

Machine learning engineering happens in three stages — data processing, model building, and deployment and monitoring. In the middle we have the meat of the pipeline, the model, which is the machine learning algorithm that learns to predict given input data.

That model is where “deep learning” would live. Deep learning is a subcategory of machine learning algorithms that use multi-layered neural networks to learn complex relationships between inputs and outputs. The more layers in the neural network, the more complexity it can capture.
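
As a toy illustration of that point (my own sketch, not from the article; scikit-learn and the layer sizes are arbitrary choices), adding hidden layers is literally what makes a network “deep”:

```python
# A shallow vs. a deeper network on toy data: each added hidden layer
# increases the complexity the model can capture, at a higher training cost.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

shallow = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
deeper = MLPClassifier(hidden_layer_sizes=(16, 16, 16), max_iter=2000).fit(X, y)

print("shallow:", shallow.score(X, y), "deeper:", deeper.score(X, y))
```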

Traditional statistical machine learning algorithms (i.e. ones that do not use deep neural nets) have a more limited capacity to capture information about training data.

But these more basic machine learning algorithms work well enough for many applications, making the additional complexity of deep learning models often superfluous.

So we still see software engineers using these traditional models extensively in machine learning engineering — even in the midst of this deep learning craze.

But the bread of the sandwich, the part that holds everything together, is what happens before and after training the machine learning model.

The first stage involves cleaning and formatting vast amounts of data to be fed into the model. The last stage involves careful deployment and monitoring of the model. We found that most of the engineering time in AI is not actually spent on building machine learning models — it’s spent preparing and monitoring those models.

The meat of machine learning — and avoiding exotic flavors

Despite the focus on deep learning at the big tech company AI research labs, most applications of machine learning at these same companies do not rely on neural networks and instead use traditional machine learning models.

The most common models include linear/logistic regression, random forests and boosted decision trees.

These are the models behind friend suggestions, ad targeting, user interest prediction, supply/demand simulation, search result ranking and other services tech companies run.

And some of the tools engineers use to train these models are similarly well-worn. One of the most commonly used machine learning libraries is scikit-learn, which was released a decade ago (although Google’s TensorFlow is on the rise).
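
To make that concrete, here is a minimal, hedged sketch of training one of those traditional models with scikit-learn; the synthetic data stands in for a real, cleaned feature set:

```python
# Train and evaluate a gradient-boosted decision tree ensemble, one of the
# traditional workhorse models named above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Training a model like this typically takes seconds to minutes on a laptop, which is exactly the development-speed advantage described below.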

There are good reasons to use simpler models over deep learning. Deep neural networks are hard to train. They require more time and computational power (they usually require different hardware, specifically GPUs). Getting deep learning to work is hard — it still requires extensive manual fiddling, involving a combination of intuition and trial and error.

With traditional machine learning models, the time engineers spend on model training and tuning is relatively short — usually just a few hours. Ultimately, if the accuracy improvements that deep learning can achieve are modest, the need for scalability and development speed outweighs their value.

Attempting to stick it all together — tools from data to deployment

So when it comes to training a machine learning model, traditional methods work well. But the same does not apply to the infrastructure that holds together the machine learning pipeline. Using the same old software engineering tools for machine learning engineering creates greater potential for errors.

The first stage in the machine learning pipeline — data collection and processing — illustrates this. While big companies certainly have big data, data scientists or engineers must clean the data to make it useful — verify and consolidate duplicates from different sources, normalize metrics, design and prove features.

At most companies, engineers do this using a combination of SQL or Hive queries and Python scripts to aggregate and format up to several million data points from one or more data sources.

This often takes several days of frustrating manual labor.

Some of this is needlessly repetitive work, because the process at many companies is decentralized — data scientists or engineers often manipulate data with local scripts or Jupyter Notebooks. A sketch of what such a script typically looks like follows.
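
(The file, column and metric names below are invented for illustration.)

```python
# Aggregate and format raw data points from two sources into a training set.
import pandas as pd

a = pd.read_csv("events_source_a.csv")
b = pd.read_csv("events_source_b.csv")

df = pd.concat([a, b], ignore_index=True)
df = df.drop_duplicates(subset=["user_id", "timestamp"])  # consolidate duplicates
df["latency_ms"] = df["latency_s"] * 1000                 # normalize metrics
df["hour"] = pd.to_datetime(df["timestamp"]).dt.hour      # derive a feature
df.to_csv("training_set.csv", index=False)
```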

Furthermore, the large scale of big tech companies compounds errors, making careful deployment and monitoring of models in production imperative. As one engineer described it, “At large companies, machine learning is 80 percent infrastructure.”

However, unit tests — the backbone of traditional software testing — don’t really work with machine learning models, because the correct output of machine learning models isn’t known beforehand.

After all, the purpose of machine learning is for the model to learn to make predictions from data without the need for an engineer to specifically code any rules.

So instead of unit tests, engineers take a less structured approach: They manually monitor dashboards and program alerts for new models.

And shifts in real-world data may make trained models less accurate, so engineers re-train production models on fresh data on a daily to monthly basis, depending on the application.
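
A hedged sketch of that monitor-and-retrain loop (the drift test, threshold and file names are illustrative choices, not a description of any particular company’s system):

```python
# Alert when the live prediction distribution drifts away from a reference
# window: the usual trigger for retraining on fresh data.
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference, live, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test between prediction distributions."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

reference = np.load("reference_predictions.npy")
live = np.load("todays_predictions.npy")
if drifted(reference, live):
    print("ALERT: prediction drift detected; retrain the model on fresh data")
```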

But a lack of machine learning-specific support in the existing engineering infrastructure can create a disconnect between models in development and models in production — normal code is updated much less frequently.

Many engineers still rely on rudimentary methods of deploying models to production, such as saving a serialized version of the trained model or model weights to a file.
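
With scikit-learn, for example, that rudimentary path often amounts to little more than a joblib dump and load (a sketch; the model and file name are illustrative):

```python
# Training side: fit a model and serialize it to a file.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
joblib.dump(model, "model-v3.joblib")

# Serving side: reload the file in the production process and predict.
served = joblib.load("model-v3.joblib")
print(served.predict(X[:5]))
```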

Engineers sometimes need to rebuild model prototypes and parts of the data pipeline in a different language or framework so that they work on production infrastructure.

Any incompatibility from any stage of the machine learning development process — from data processing to training to deployment to production infrastructure — can introduce error.

Making it presentable — the road forward

To address these issues, a few big companies, with the resources to build custom tooling, have invested time and engineering effort into creating their own machine learning-specific tools. Their goal is to have a seamless, end-to-end machine learning platform that is fully compatible with the company’s engineering infrastructure.

Facebook’s FBLearner Flow and Uber’s Michelangelo are internal machine learning platforms that do just that.

They allow engineers to construct training and validation data sets with an intuitive user interface, decreasing time spent on this stage from days to hours.

Then, engineers can train models with (more or less) the click of a button. Finally, they can monitor and directly update production models with ease.

Microsoft’s Azure Machine Learning and Amazon Machine Learning are publicly available alternatives that provide similar end-to-end platform functionality, but they only integrate with other Microsoft or Amazon services for the data storage and deployment components of the pipeline.

Despite all the emphasis big tech companies have placed on enhancing their products with machine learning, at most companies there are still major challenges and inefficiencies in the process. They still use traditional machine learning models instead of more-advanced deep learning, and still depend on a traditional infrastructure of tools poorly suited to machine learning.

Fortunately, with the current focus on AI at these companies, they are investing in specialized tools to make machine learning work better. With these internal tools, or potentially with third-party machine learning platforms that are able to integrate tightly into their existing infrastructures, organizations can realize the potential of AI.

A special thank you to Irving Hsu, David Eng, Gideon Mann and the Bloomberg Beta team for their insights.

Source: https://techcrunch.com/2017/08/08/the-evolution-of-machine-learning/

The Exciting Evolution of Machine Learning

Machine Learning (ML): It may sound like a gold mine to many businesses, especially to companies that are effectively data factories, i.e. social media platforms.

Sadly, machine learning as currently used in industry is extremely limited and dedicated to completing mundane tasks.

The real need, which is still largely missing in the real business world, is to clarify and demonstrate this buzzword, extract real value from it and reap the rewards.

In this blog post, I attempt to create a short text movie of machine learning’s timeline, to give a high-level view of how machine learning has evolved.

For the basics of machine learning, please read this.

Machine Learning and AI

Artificial intelligence and machine learning are often used interchangeably, but they are not the same: machine learning is one of the most active research areas and one way to achieve AI. Why is ML doing so well today? A few of the reasons (though by no means all of them) are below.

  • The explosion of big data
  • Hunger for new business and revenue streams in these times of shrinking business
  • Advancements in machine learning algorithms
  • Development of extremely powerful machines with high capacity and faster computing ability
  • Growth in storage capacity

Today’s machines are learning and performing tasks that in the past could only be done by humans: making better judgements and decisions, playing games, and so on.

This is possible because machines can now analyse data, recognise patterns and remember what they learn for future use.

Today the major problem is finding people skilled enough to apply and demonstrate what they learned from university and PhD books in real business, rather than just arguing with others on social media.

Machine learning should be treated as a culture in an organisation: business teams, managers and executives should all have some basic knowledge of this technology.

In order to achieve this as a culture, there have to be continuous programs and road shows for them.

There are many courses designed for students, employees with little or no experience, managers, professionals and executives, to give them a better understanding of how to harness this magnificent technology in their business.

Machine Learning – The Journey of Automation

In a machine learning environment or setup, various algorithms and services are managed between the actual source of the data and the learning platform, which can be in the cloud.

Most ML today does its work on centralised infrastructure, though there are some success stories where it also runs exceptionally well on distributed infrastructure.

These methods may or may not be the most efficient, but for now they work well.

Machine learning has evolved from a subset of artificial intelligence into (almost) its own domain. It has reached an inflexion point – at least in terms of messaging.

I remember that in my school days, in the 1990s, we were told something about AI and ML as part of statistics class, and we laughed.

Blockchain, quantum computing and wireless connectivity to support IoT on 4G/5G (with 6G not very far off) are now the norm, not fascinating new tech.

The evolution has run from Turing machines to today’s highly intelligent robots. How far machine learning has come from its origin is difficult to judge and measure, but the results are clear and visible. In recent years, the term ‘machine learning’ has become very popular among developers and businesses alike, even though research in the field has been going on for more than five decades.

AILabPage defines machine learning in a simple and easy manner, as above. Classic examples of ML that we use, knowingly or unknowingly, on an almost daily basis:

  • Managing our email box. Each time we click the “junk” or “not junk” button on a miscategorised email, the machine learns a little bit more.
  • Using Siri, Cortana or Google Assistant on smartphones.

These tools become more efficient with usage: with each action we take and each interaction, the machine learns from its mistakes and adjusts its attributes; in the email system, for example, it reconsiders how it classifies each email. In short, the more we torture the data, the better the machine learning algorithm gets. A minimal sketch of that incremental loop follows.
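
Here is a hedged sketch of that loop, using scikit-learn’s incremental partial_fit; the toy emails and the choice of a Naive Bayes classifier are mine, not a description of any real mail system:

```python
# A text classifier that learns a little more from each "junk"/"not junk" click.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

vectorize = HashingVectorizer(n_features=2**16, alternate_sign=False).transform
clf = MultinomialNB()

# Initial batch: 1 = junk, 0 = not junk.
clf.partial_fit(vectorize(["win a free prize now", "meeting moved to 3pm"]),
                [1, 0], classes=[0, 1])

# Each user correction becomes one more incremental update.
clf.partial_fit(vectorize(["cheap pills online, click here"]), [1])
print(clf.predict(vectorize(["free prize, click here"])))  # likely [1] (junk)
```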

Machine Learning Approach

The approach to developing ML involves learning from data inputs (“what has happened”); the focus remains on evaluating and optimizing the results of different models.

As of today, machine learning is widely used in data analytics as a method for developing algorithms that make predictions on data. It is related to probability, statistics and linear algebra.

Point to note: it’s not just regression, or a fancy version of regression.

Machine learning is classified into three categories at a high level, depending on the nature of the learning signal and the learning system (a minimal contrast of these is sketched after the list).

  1. Supervised learning: The machine gets labelled inputs and their desired outputs. The goal is to learn a general rule that maps inputs to outputs.
  2. Unsupervised learning: The machine gets inputs without desired outputs; the goal is to find structure in the inputs.
  3. Reinforcement learning: The algorithm interacts with a dynamic environment and must achieve a certain goal without a guide or teacher.
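
A minimal, hedged contrast of the first two categories on toy data (reinforcement learning needs an interactive environment, so it is only noted in a comment):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# 1. Supervised: labelled inputs, learn a rule mapping inputs to outputs.
supervised = LogisticRegression(max_iter=1000).fit(X, y)

# 2. Unsupervised: inputs only, find structure (here, three clusters).
unsupervised = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# 3. Reinforcement learning would instead loop: act, observe a reward, adjust.
print(supervised.score(X, y), unsupervised.labels_[:10])
```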

Machine Learning’s Primary Goal

AILabPage defines machine learning as “A focal point where business, data, experience meets emerging technology and decides to work together”. If you have not unfolded the machine learning jargon already, please take a look at our machine learning post series library.

Machine learning is the process of a machine attempting to accomplish a task, independent of human intervention, more efficiently and more effectively with every passing attempt, i.e. the learning phase.

At this point, AI, a machine which mimics the human mind, is still a pipe dream.

Machine Learning is a subset of artificial intelligence where computer algorithms are used to autonomously learn from data and information.

Machine learning has been a reality in our lives for quite some time. Deep learning, on the other hand, is a subcategory of machine learning algorithms that use multi-layered neural networks to learn complex relationships between inputs and outputs.

The most basic rule, and the most meaningful point for understanding anything about machine learning, is the quality of the data it gets served. This means it is purely driven by data sets to provide answers; it follows the GIGO (garbage in, garbage out) principle. The algorithms are as important as the underlying data sets.

Machine Learning Evolution

Some of the evolutions that made a huge positive impact on real-world problem solving are highlighted in the picture above, titled “Machine Learning and some use cases”. There are success stories where organisations have made remarkable progress and added value for the business with each type of learning. Making the right choice about which ML technique to use for a particular business problem requires experience and thorough understanding.

Each type of machine learning provides a strategic and competitive advantage, but the availability of the quality data on which the choice of technique is based is far more important.

Knowing the types of machine learning algorithms, and which one to use when, is extremely important.

The goal of any machine learning task, and of everything being done in the field that puts you in a better position, is to break a real problem down into a design for machine learning systems.

ML develops its own encompassing strategy from the experience it accumulates over time. Andrew Ng says AI is a new form of electricity, and in my opinion ML is no different. It has been one of the most active and rewarding areas of research due to its widespread use in many areas. Some of the major breakthroughs in ML include natural language processing and deep learning.

Points to Note:

All credit, if any, remains with the original contributors. We have covered the basics of machine learning. Machine learning is all about data, computing power and algorithms that look for information. In an upcoming post, we will cover a new type of machine learning task under neural networks: generative adversarial networks, a family of artificial neural networks.

Books + Other readings Referred

  • Research through open internet, news portals, white papers and imparted knowledge via live conferences & lectures.
  • Lab and hands-on experience of @AILabPage (self-taught learners group) members.

Feedback & Further Question

Do you have any questions about supervised learning or machine learning? Leave a comment or ask your question via email. I will try my best to answer it.

Conclusion – Machine learning is a branch of computer science, unlike statistics, which is a graphical branch of mathematics. That is not entirely true, but to some extent statistics can be called a graphical branch of mathematics (in my personal opinion).

ML deals with the development of computer algorithms that learn and grow on their own. Machine learning can also be very task-specific: in a particular environment, a machine can be made to learn just the rules of a game.

In that case it has to try out millions of different strategies for each situation before it is ready to play against an opponent, for example in a game of chess or backgammon.

#MachineLearning #DeepLearning #ArtificialIntelligence #ArtificialNeuralNetworks

================ About the Author ===================

Read about Author at : About Me

Thank you all for spending your time reading this post. Please share your feedback, comments, criticism, agreement or disagreement. For more details about posts, subjects and relevance, please read the disclaimer.

Source: https://vinodsblog.com/2018/03/11/the-exciting-evolution-of-machine-learning/

Wearable Monitoring and Interpretable Machine Learning Can Objectively Track Progression in Patients during Cardiac Rehabilitation

Open Access Article

by Hélène De Cannière 1,2,*, Federico Corradi 3, Christophe J. P. Smeets 1,2,3, Melanie Schoutteten 1,2, Carolina Varon 4,5, Chris Van Hoof 4,6, Sabine Van Huffel 4, Willemijn Groenendaal 3 and Pieter Vandervoort 1,2,7

  1. Mobile Health Unit, Faculty of Medicine and Life Sciences, Hasselt University, 3500 Hasselt, Belgium
  2. Future Health Department, Ziekenhuis Oost-Limburg, 3600 Genk, Belgium
  3. imec the Netherlands/Holst Centre, 5656AE Eindhoven, The Netherlands
  4. KU Leuven, Department of Electrical Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, 3001 Leuven, Belgium
  5. TU Delft, Department of Microelectronics, Circuits and Systems (CAS), 2600AA Delft, The Netherlands
  6. imec vzw Belgium, 3001 Leuven, Belgium
  7. Department of Cardiology, Ziekenhuis Oost-Limburg, 3600 Genk, Belgium

* Author to whom correspondence should be addressed.

Sensors 2020, 20(12), 3601; https://doi.org/10.3390/s20123601

Received: 28 April 2020 / Revised: 12 June 2020 / Accepted: 22 June 2020 / Published: 26 June 2020

Cardiovascular diseases (CVD) are often characterized by their multifactorial complexity. This makes remote monitoring and ambulatory cardiac rehabilitation (CR) therapy challenging. Current wearable multimodal devices enable remote monitoring. Machine learning (ML) and artificial intelligence (AI) can help in tackling multifaceted datasets. However, for clinical acceptance, easy interpretability of the AI models is crucial. The goal of the present study was to investigate whether a multi-parameter sensor could be used during a standardized activity test to interpret functional capacity in the longitudinal follow-up of CR patients. A total of 129 patients were followed for 3 months during CR using 6-min walking tests (6MWT) equipped with a wearable ECG and accelerometer device. Functional capacity was assessed by 6MWT distance (6MWD). Linear and nonlinear interpretable models were explored to predict 6MWD. The t-distributed stochastic neighboring embedding (t-SNE) technique was exploited to embed and visualize high-dimensional data. The performance of support vector machine (SVM) models, combining different features and using different kernel types, to predict functional capacity was evaluated. The SVM model using chronotropic response and effort as input features showed a mean absolute error of 42.8 m (±36.8 m). The 3D maps derived using the t-SNE technique visualized the relationship between sensor-derived biomarkers and functional capacity, which enables tracking of the evolution of patients throughout the CR program. The current study showed that wearable monitoring combined with interpretable ML can objectively track clinical progression in a CR population. These results pave the road towards ambulatory CR.

Keywords: wearable sensor; machine learning; physical fitness assessment; cardiac rehabilitation; patient progression monitoring

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share and Cite

De Cannière, H.; Corradi, F.; Smeets, C.J.P.; Schoutteten, M.; Varon, C.; Van Hoof, C.; Van Huffel, S.; Groenendaal, W.; Vandervoort, P. Wearable Monitoring and Interpretable Machine Learning Can Objectively Track Progression in Patients during Cardiac Rehabilitation. Sensors 2020, 20, 3601. https://doi.org/10.3390/s20123601

Source: https://www.mdpi.com/1424-8220/20/12/3601

The Progression of AI: from Supervised to Unsupervised Learning

Undoubtedly, artificial intelligence has come a long way in the last decade, both in terms of practical applications and research possibilities.

Today, AI can win over humans in the games of Go, Dota 2, Jeopardy, chess and, occasionally, poker.

It can distinguish between detailed images of human faces, solve mathematical problems, predict human behavior, write poetry, create computer code, translate from virtually any language in the world, and much more.

The power of AI is largely its internal engine: machine learning (ML) algorithms. ML can predict the future from the past and is basically an advanced form of statistical probability calculation.

Machine learning: the essence of AI

Machine learning (ML) isn’t a new concept; the first, albeit simple, machine learning algorithms were introduced in the 1950s. Then, for the next 30 years or so, there was a hiatus in its development, caused mostly by the lack of significant progress made in the field.

Finally, in the 90s, researchers began creating more sophisticated ML algorithms, which allowed computers to analyze heaps of data and learn from them.

That is when neural networks became significant as well, though it wasn’t until the early 2000s that unsupervised learning started gaining popularity.

These advances in AI were largely due to progress made in machine learning algorithms, with the most striking progress made by unsupervised learning as compared with supervised learning. We’ll find out why below.

Supervised learning

With this type of machine learning, we can predict future events from past trends, but only using pre-defined models and pre-labeled data sets.

In other words, the algorithm can help us compare data and categorize it, but only if we tell it what we are looking for in the first place.

Certainly, supervised learning has limitations, as researchers need to define data in advance, or the findings won’t be that useful.

Unsupervised learning

Unlike supervised learning, unsupervised learning algorithms can identify existing structures in the base data without humans having to define those first.

This is very useful when we are working with unstructured data and want to explore patterns that we may not be aware of.

Unsupervised learning is much more helpful than supervised learning when it comes to uncovering similarities, unusual events or anomalies in data sets and clustering data that is, on the surface, completely disconnected.
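
As one hedged illustration of that anomaly-detection use case (my example, not the author’s), an isolation forest can flag unusual records with no labels at all:

```python
# Flag the observations that sit apart from the rest of the data set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
ordinary = rng.normal(0, 1, size=(1000, 2))   # e.g. typical transactions
unusual = rng.normal(6, 1, size=(10, 2))      # a few anomalous events
X = np.vstack([ordinary, unusual])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
print("anomalies flagged:", (detector.predict(X) == -1).sum())  # -1 = anomaly
```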

Applications of unsupervised and supervised learning

Both types of machine learning have wide applications in business analytics, business intelligence, bioinformatics, spam detection, image, object and speech recognition and segmentation, genetic clustering, pattern and sequence mining, and more.

Unsupervised learning has greater utility when it comes to more abstract purposes. It can be used, for example, in detecting fraudulent transactions and climate change abnormalities. The ability to discover unknown trends is invaluable and has countless applications in many industries.

The future of supervised and unsupervised learning

Both supervised and unsupervised learning have shortcomings, mostly having to do with the ability to process and draw meaningful conclusions from complex data, and this is where the more advanced deep learning technologies come in. Unfortunately, public investment in deep learning is still lacking.

Most of Big Tech today is investing in custom AI and internal machine learning tools that can make sense of virtually any kind of data, many of them incorporating deep-learning technology. For smaller businesses that don’t have the budget and capacity to invest in AI exploration, third-party ML platforms that integrate with existing business systems are starting to show potential.

Introducing AI-powered apps to your business operations

Being able to predict customer or user behavior can present businesses with a clear advantage.

AI can help you do just that, by analyzing previous purchase patterns and history, product ratings and reviews, and search history, for example.

Marketing departments are starting to utilize such tools and gain insights from the terabytes of data that are being collected on prospects and customers at every turn.

If supervised learning could predict how often you should re-target customers for the maximum effectiveness of your marketing campaigns, then unsupervised learning would tell you the most effective marketing strategy for each type of customer.

The benefit of the latter is evident: the ability to tailor your marketing and advertising efforts to each customer is the most effective strategy there is, allowing you to put your euros where they count most.

As AI becomes better at recognizing patterns, the use cases for unsupervised learning will continue growing.

Although AI isn’t yet useful in all business operations and challenges, the areas of marketing and advertising, transport and logistics, supply chain, dynamic pricing, demand planning, and accounting/finance can immediately benefit from custom machine learning solutions.

If you’d like help exploring your options, contact the Pegus team of experts.

Copywriter inadanova.com

Source: https://pegus.digital/the-progression-of-ai-from-supervised-to-unsupervised-learning/

Review of Machine Learning 2020. We made a lot of progress this year in…

Dec 15, 2020 · 7 min read

We made a lot of progress this year in the field of Machine Learning. The proliferation of AI-based solutions among non-tech companies is getting closer and closer.

To be honest, I think we’ve accomplished a lot, but we still have a long way to go. We have to regulate this field and can’t let it run loose, because that would lead to a more divided and unequal world rather than a better one.

Let’s recap what was 2020 all about in the field of Machine Learning and Data Science.

Interpretable Machine Learning / Explainable AI

From a development perspective, we used to call machine learning models black boxes and didn’t even want to try to explain their working mechanisms.

We just sat back and expected the business to simply trust us and accept these black-box models. This is simply not what the business wants.

This is not enough for the people who will use these solutions and have to deal with the mistakes they make. Though models never have to face the real-life consequences of their decisions, people do.

Machine learning workflow (Source: Jamine Zorzona)

If the predictions come without any justification, the users have to trust the model blindly; but if people understand why the model is saying what it is saying, that can boost their trust in it. This field got a lot of attention this year, which will hopefully continue in 2021. The reasons why this field has become one of the most important are:

  1. The digital transformation is still in progress at most big companies. They are just exploring the newest tech solutions, and adopting a system they do not understand, and relying on it, is not going to work. Understanding what happens when an ML model makes its prediction will definitely speed up the spread of these systems.
  2. Not just because of the “right to explanation” article by the EU, which tends to regulate automated decision-making, but because by now there are sectors, such as banking and insurance, where it is a must that the models they use be explainable.
  3. Trust in the models is even more important in the fields of medicine and healthcare, where these systems can make a real impact on humankind. Providing people with more information, not just the result, can increase their trust in the prediction.

This list could contain many more examples, but I guess you got the point: this field is extremely important, and we just can’t ignore it anymore if we want to make progress and spread so-called “AI-driven solutions” in business. A small illustration of one interpretability technique follows.
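
As one small, hedged illustration (my choice of technique, one of many): permutation importance attaches a simple justification to a model by scoring how much each input feature drives its accuracy.

```python
# Shuffle each feature in turn and measure the resulting drop in accuracy;
# large drops identify the features the model actually relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)
top = sorted(zip(data.feature_names, result.importances_mean),
             key=lambda pair: -pair[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```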

Automated Machine Learning

AutoML got a lot of attention this year, and several companies are working on their own solutions. I personally have some concerns about it, and not out of fear that automated machine learning will take my job; simply that the users of AutoML systems are often not the ones who should be using them.

AutoML is a great field: it automates the modeling process, generating new features for us, doing the data preprocessing, selecting the model, and tuning its hyperparameters. A small sketch of the core idea follows.
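
A hedged sketch (not any vendor’s product): automated model tuning is, at bottom, a search over candidate configurations scored by cross-validation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, random_state=0)

# Try every combination in the grid, score each with 5-fold CV, keep the best.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={"n_estimators": [50, 200],
                                  "max_depth": [3, None]},
                      cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```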

Automated machine learning (Source: Deni Vorotyntsev)

That’s nice and good so far, but AutoML is, or should be, a tool, not a replacement for a data science team.

With AutoML tools, you can save time and automate the modeling process as a data scientist and relatively quickly get results that can be presented. But AutoML definitely doesn’t manage the whole CRISP-DM process, just one step of it.

That step is, by the way, one of the most important, but without context we will get just a model, not a solution to a problem.

After the hating part, we should acknowledge that all three big cloud providers (AWS, GCP and Azure) are making decent progress with their AutoML solutions.

The current leader of this competition is H2O AutoML, which not only provides the best accuracy but has become popular among AutoML users thanks to its ease of use. We should also mention AutoKeras, whose first official release was early this year.

It relies on the popular deep learning libraries Keras and TensorFlow. Accuracy is not the only key metric we should track; scalability, flexibility and transparency are almost as important.

Full Stack Data Scientists

Full stack developers have been all around us for several years in the world of web development. It was just a matter of time before this special species evolved for the data field.

By now, data scientists already know what to use and how to use it: CNN models for computer vision, tree-based methods for tabular data, and the transformers suggested for NLP problems. So many state-of-the-art models are out now that we just have to know how to use them.

This means that for data science projects, data preprocessing and modeling are not the hardest parts anymore.

The main challenges for data science teams are deploying and maintaining the models in production. Therefore MLOps is becoming more and more important, and software engineering and DevOps skills are highly appreciated in a data scientist.

Creating a good model that runs only locally is not enough anymore; building an end-to-end system, which includes dockerizing the solution and operating it on-premise or in the cloud, is a more reasonable expectation of a data scientist.

AI pioneers

To be honest, this paragraph would be totally different if I had written it in November: I would probably mention DeepMind in one sentence, just to set up the praise of OpenAI, which created GPT-3, a mind-blowing language model with a lot of capabilities. But since then it has turned out that DeepMind’s newest AI tool, AlphaFold, is making a scientific breakthrough.

AlphaFold
AlphaFold can accurately predict 3D models of protein structures and has the potential to accelerate research in every field of biology. At first I thought it was not bad, but nothing special at all.

When I dug into this topic, I realized how big it actually is. AlphaFold can accurately predict a protein’s shape from its sequence of amino acids. There are more than 200 million proteins, built up from combinations of 20 different types of amino acids.

Until now, scientists had revealed just a fraction of the 3D protein models. AlphaFold scores an over 90% match on the global distance test for protein folding, which means the problem of protein folding has essentially been solved.

This huge achievement doesn’t make a great impact on our lives directly, but it can accelerate research progress in many areas.

GPT-3
The newest generation of OpenAI’s language prediction models. The previous one, GPT-2, was fully released only 6 months after its original, partial release because of concerns about it: it is so powerful that it could easily write fake news, compose emails, and so on.

That model contains 1.5 billion parameters, which is just a tiny amount compared to the newest generation: GPT-3 contains 175 billion parameters. The quality of the text generated by GPT-3 is so good that distinguishing it from human-written text is nearly impossible.

GPT-3 can create anything that has a language structure which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code.

GPT-3, like its ancestors, is a pre-trained model: the user feeds text as input to the model, which generates the output. To be able to perform at such a high level, OpenAI had to spend about 4.6 million dollars training the model.

The result is fascinating, but also so powerful that it is not yet open to the average person; to get access, you have to request it from OpenAI and join its waiting list. Once it is released, Microsoft will operate it on Azure.

Summary

2020 was an interesting year in many respects, and I think 2021 will bring several new and exciting topics. The need for explainable AI will be more urgent, the rise of the full stack data scientist will continue, and there will be more attention on MLOps than ever.

I am pretty curious about the GPT-3 API and can’t wait to use it. I also think that feeling the actual results of AlphaFold’s huge achievement is still a couple of years away.

So this was my yearly recap of the most interesting topics in the field of machine learning and data science.

Author: Kristóf Szalóki

Source: https://medium.com/swlh/yearly-review-of-machine-learning-2020-81d71a112dac
