Take yourself back to February 2020. Life was relatively normal: kids were at school, we physically went into work, and everyone was more certain of the paths they were on. A year later, people of all ages are a lot more tech savvy, having been forced to work from home, do online schooling or hold online gatherings just to keep in touch with loved ones. We have had to embrace the change and step out of our comfort zones, learning how to use technology to navigate everyday life.

While it's true that South Africa is still behind in digitization, it is catching up fast thanks to COVID-19, catalyzed by boardrooms across the country focusing on digitization like never before. One such focus is the efficiency driven by Artificial Intelligence and Machine Learning (AI/ML). SafriCloud surveyed SA's leading IT decision makers to assess the sentiment and adoption outlook for these technologies amongst business and IT professionals. The results have been published in an eye-opening report entitled 'AI: SA – The state of AI in South African businesses 2021'.

'Keen to start but facing a few challenges' was the pervasive theme across the survey respondents. But with the global Machine Learning market projected to grow from $7.3 billion in 2020 to $30.6 billion by 2024*, why do we still see resistance to adoption?

Nearly 60% of respondents said that their business supports them in their desire to implement AI/ML, and yet only 25% believed that it is well understood at an executive level. While 'fear of the unknown' ranked in the top three adoption challenges both locally and internationally (Gartner, 2020), only 9.34% of respondents cited 'lack of support from C-suite' as a challenge.

There is a clear degree of pessimism about the level of skills and knowledge to be found in the South African market. This pessimism is more pronounced at senior management level, where more than 60% rated 'low internal skill levels' as the top challenge facing AI/ML adoption. With nearly 60% of respondents rating the need to implement AI/ML in the next two years as 'important' to 'very important', and only 35% of businesses saying they currently have internal resources focused on AI/ML, the skills gap will continue to grow.

Artificial Intelligence and Machine Learning represent a new frontier in business. Like previous generations that faced new frontiers (such as personal computing and the industrial revolution), we can't predict what these changes might lead to. All we can really say is that business will be different, jobs will be different and how we think will be different. Those open to being different will be the ones that succeed.

Get free and instant access to the full report to discover whether your business is leading the way or falling behind: https://www.safricloud.com/ai-sa-the-state-of-ai-in-south-african-businesses/

Report highlights include:
- The areas of AI/ML that are focused on the most.
- The state of the AI job market and how to hire.
- Practical steps to train and pilot AI/ML projects.
Ethiopian Airlines
Improves service delivery and monetizes website window shoppers

Unrivaled in Africa for efficiency and operational success, Ethiopian Airlines serves 127 international and 22 domestic destinations. Like many airlines, the company is always looking to lower costs and improve margins. Growing direct sales by capturing every potential booking from callers and website visitors is vital to meeting those goals.

However, the airline's contact center struggled with incompatible systems and information islands. Calls were routed to agents without taking language skills or competencies into account, which raised abandon rates, transfers and handoffs. Teams worked in silos using email and chat. There was no CRM system or workforce management; data resided on a central booking system or was buried elsewhere. The company lacked a full overview of the customer journey and real-time insight into conversations and preferences.

The first step in the transformation was to replace an externally hosted contact center solution with a strong omnichannel platform that the company could manage internally and use to drive improvements and business growth. Live after two months, the Genesys Cloud™ contact center allows up to 500 agents to work more productively in a blended fashion, effortlessly switching between calls, email and chat conversations, all managed from a single omnichannel desktop. Introducing Genesys Workforce Management further improved the customer experience.

As a result, Ethiopian Airlines has seen service levels soar from 70% to 95%, with higher first-call resolution and sizeable reductions in abandoned calls (from 20% to 3%). Call-answer times have dropped from 20 to 8 seconds.

Within two weeks of implementing Genesys Predictive Engagement, the airline not only gained insights about website journeys, it also leveraged artificial intelligence (AI) and analytics to uncover the behaviors and interests of visitors. This allowed the company to offer tailored deals through webchat. Ethiopian Airlines can also engage customers through the website with attractive travel packages created as a result of tracking real-time statistics and data.

Benefits
- 25% increase in service levels
- 60% faster call response
- 17% fewer abandoned calls
- 49% increase in website sales conversions
- 72% reduction in website dwell time
- Effective pandemic response without adding headcount
- Future roadmap for mobile and AI integration

"Genesys Predictive Engagement is enabling us to capture significantly more window shoppers on our website. Conversion rates rose by 14% in the first two weeks and by 49% at the six-week stage. And, we've only really scratched the surface of what the tool can do."
Getinet Tadesse, CIO, Ethiopian Airlines

Download AI success stories ebook: https://www.genesys.com/resources/improve-customer-satisfaction-sales-and-workforce-engagement-with-genesys-blended-ai
Artificial Intelligence and Machine Learning for robust Cyber Security
Machine Learning Africa recently partnered with Darktrace to present a webinar on leveraging AI and Machine Learning for robust cybersecurity.

Topic: Leveraging Artificial Intelligence and Machine Learning in building robust cybersecurity solutions.

The adoption of emerging technologies comes with increasing cybersecurity risks. AI and ML can be used to detect and analyze cybersecurity threats effectively at an early stage. Warren Mayer, Alliances Director for Africa at Darktrace, provided invaluable insight on the importance of self-learning and self-defending networks in mitigating cybersecurity risks.

WATCH THE WEBINAR ON DEMAND
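To give a flavor of how ML can flag threats at an early stage, here is a minimal, generic sketch (not Darktrace's method): it uses scikit-learn's IsolationForest on hypothetical network-flow features to learn a baseline of normal traffic and flag outliers.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical network-flow features per connection: bytes sent, bytes
    # received, duration (seconds), and distinct ports contacted per hour.
    rng = np.random.default_rng(42)
    normal_traffic = rng.normal(loc=[500.0, 800.0, 30.0, 3.0],
                                scale=[100.0, 150.0, 10.0, 1.0],
                                size=(1000, 4))

    # Learn a baseline of "normal" behaviour from historical traffic.
    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(normal_traffic)

    # Score new flows: -1 flags an anomaly, 1 means consistent with baseline.
    new_flows = np.array([[520.0, 790.0, 28.0, 3.0],       # looks ordinary
                          [50000.0, 100.0, 2.0, 250.0]])   # exfiltration-like burst
    print(model.predict(new_flows))

The idea mirrors the "self-learning" theme of the webinar: the model is trained only on observed behaviour, with no hand-written attack signatures.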
How is Coding Used in Data Science & Analytics
What is Data Science?

In recent years the phrase "data science" has become a buzzword in the tech industry. The demand for data scientists has surged since the late 1990s, presenting new job opportunities and research areas for computer scientists. Before we delve into the computer science aspect of data science, it's useful to know exactly what data science is and to explore the skills required to become a successful data scientist.

Data science is a field of study that involves the processing of large sets of data with statistical methods to extract trends, patterns, or other relevant information. In short, data science encapsulates anything related to obtaining insights, trends, or any other valuable information from data. The foundations of these tasks originate from the fields of statistics, programming, and visualization. A successful data scientist has in-depth knowledge in these four pillars:

1. Math and Statistics: From modeling to experimental design, encountering something math-related is inevitable, as data almost always requires quantitative analysis.
2. Programming and Databases: Knowing how to navigate data hierarchies, or big data, and query certain datasets, alongside knowing how to code algorithms and develop models, is invaluable to a data scientist (more on this below).
3. Domain Knowledge and Soft Skills: A successful and effective data scientist is knowledgeable about the company or firm at which they are working and proactive at strategizing and/or creating innovative solutions to data issues.
4. Communication and Visualization: To make their work accessible to all audiences, data scientists must be able to weave a coherent and impactful story through visuals and facts to convey the importance of their work. This is usually done with certain programming languages or data visualization software, such as Tableau or Excel.

Does Data Science Require Coding?

Short answer: yes. As described in points 2 and 4, coding plays a significant role in data science, making appearances in almost every step of the process. But how is coding utilized in each step of solving a data science problem? Below, you'll find the different stages of a typical data science experiment and a detailed account of how coding is integrated within the process. It's important to remember that this process is not always linear; data scientists tend to ping-pong back and forth between different steps depending on the nature of the problem at hand.

Preplanning and Experimental Design

Before coding anything, it's necessary for data scientists to understand the problem being solved and the desired objective. This step also requires data scientists to figure out which tools, software, and data will be used throughout the process. Although coding is not involved in this phase, it can't be skipped, as it keeps a data scientist focused on the objective and prevents white noise, unrelated data, or irrelevant results from distracting them.

Obtaining Data

The world has a massive amount of data that is growing constantly. In fact, Forbes reports that humans create 2.5 quintillion bytes of data daily. From such vast amounts of data arise vast amounts of data quality issues. These issues can range from duplicate or missing datasets and values to inconsistent, misentered, or outdated data. Obtaining relevant and comprehensive datasets is tedious and difficult. Oftentimes, data scientists use multiple datasets, pulling the data they need from each one. This step requires coding with querying languages, such as SQL and NoSQL, as in the sketch below.
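As a minimal illustration of this step, here is the kind of SQL query a data scientist might write to pull only the relevant, non-missing records; the customers and orders tables and their columns are hypothetical.

    -- Pull each customer's region and order totals from two hypothetical
    -- tables, keeping only recent orders with a recorded total.
    SELECT c.customer_id, c.region, o.order_total
    FROM customers AS c
    JOIN orders AS o
      ON o.customer_id = c.customer_id
    WHERE o.order_date >= '2020-01-01'
      AND o.order_total IS NOT NULL;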
Cleaning Data

After all the necessary data is compiled in one location, it needs to be cleaned. For example, data that is inconsistently labeled "doctor" or "Dr." can cause problems when it is analyzed. Labeling errors, minor spelling mistakes, and other minutiae can cause major problems down the road. Data scientists can use languages like Python and R to clean data (see the sketch at the end of this article). They can also use applications such as OpenRefine or Trifacta Wrangler, which are specifically made to clean data and transform it into different formats.

Analyzing Data

Once a dataset is clean and uniformly formatted, it is ready to be analyzed. Data analytics is a broad term whose definition differs from application to application. When it comes to data analysis, Python is ubiquitous in the data science community. R and MATLAB are popular as well, as they were created for use in data analysis. Though these languages have a steeper learning curve than Python, they are useful for an aspiring data scientist to know, as they are so widely used. Beyond these languages, there is a plethora of tools available online to help expedite and streamline data analysis.

Visualizing Data

Visualizing the results of data analysis helps data scientists convey the importance of their work as well as their findings. This can be done using graphs, charts, and other easy-to-read visuals, which allow broader audiences to understand a data scientist's work. Python is a commonly used language for this step; packages such as seaborn and prettyplotlib can help data scientists make visuals. Other software, such as Tableau and Excel, is also readily available and widely used to create graphics.

Programming Languages used in Data Science

Python is a household name in data science. It can be used to obtain, clean, analyze, and visualize data, and is often considered the programming language that serves as the foundation of data science. In fact, 40% of data scientists who responded to an O'Reilly survey claimed they used Python as their main coding language. The language has contributors who have created libraries solely dedicated to data science operations and extensions into artificial intelligence and machine learning, making it an ideal choice. Common packages, such as numpy and pandas, can compute complex calculations with matrices of data, making it easier for data scientists to focus on solutions instead of mathematical formulas and algorithms. Even though these packages (along with others, such as sklearn) already take care of the mathematical formulas and calculations, it's still important to have a solid understanding of those concepts in order to implement the correct procedure through code. Beyond these foundational packages, Python also offers many specialized libraries for machine learning and related fields.
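To make the cleaning and visualization steps concrete, here is a minimal Python sketch using the article's own "doctor" vs. "Dr." example; the records are invented for illustration.

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # Hypothetical survey records with the inconsistent "doctor"/"Dr." labels
    # described above, plus a missing value.
    df = pd.DataFrame({
        "title": ["Dr.", "doctor", "Doctor", "dr", "nurse"],
        "age": [34, 29, None, 41, 38],
    })

    # Cleaning: normalize the inconsistent spellings to a single label.
    df["title"] = df["title"].str.lower().str.rstrip(".").replace({"dr": "doctor"})

    # Cleaning: fill the missing age with the column median instead of dropping the row.
    df["age"] = df["age"].fillna(df["age"].median())

    # Visualizing: a quick distribution plot with seaborn.
    sns.histplot(data=df, x="age")
    plt.show()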
Fear of the Unknown: Artificial Intelligence
Artificial Intelligence (AI) will be the most popular and developed technological trend in 2020, with a market value projected to reach $70 billion. AI is impacting several areas of knowledge and business, from the entertainment sector to the medical field, where high-precision machine learning algorithms can produce more accurate diagnoses and detect symptoms of serious diseases at a much earlier stage. The innovation that AI offers to industry, businesses, and consumers is positively changing all processes. The new decade will be driven by the rise of automation and AI-driven robotics.

However, there is a great deal of exaggeration and hysteria about the future of Artificial Intelligence and how humans will need to adapt and get used to living with it. In fact, AI is a topic that has polarized popular opinion. What is true is that AI will become the core of everything that humans interact with in the coming years, and beyond. Hence, to form a clear opinion about AI and its impact, it is important to understand what it is and what types of artificial intelligence exist.

Artificial General Intelligence (AGI) is the type of AI that can perform any cognitive function in the way a human does. The technology is not there yet, but it is developing at a fast pace, and there are interesting projects underway, such as Elon Musk's Neuralink. Today, narrow AI applications, designed to perform only one task, such as IBM Watson, Siri, Alexa and Cortana, are the ones that share the world with us. The key difference between AGI, or strong artificial intelligence, and narrow, or weak, AI lies in goal setting and volition. In the future, AGI will have the ability to reflect on its own objectives and decide whether to adjust them, and to what extent. We have to admit that, if done right, this extraordinary technological achievement will change humanity forever. However, there is still a long way to go to get to that point. Despite this, many fear that Artificial Superintelligence (ASI) will one day go beyond human cognition, a moment also known as the technological singularity.

At the moment, two groups are emerging and visible in society. On the one hand, there is the informed public, a group in which trust towards new and emerging technologies has been increasing over time. On the other hand, there is the mass population, a group where trust remains stagnant. Of course, social networks also play a role here. It's not just about consumption, but about amplification, with people sharing news more than ever and discussing the issues relevant to them. Confidence used to flow from the top down, but now it is established horizontally, from peer to peer.

Will AI benefit or destroy society? AI can only become what humans want it to become, since humans have the task of coding their AI creations. If the mass population is increasingly anxious about AI, this is due to fear of the unknown. Perhaps it is also because there is very little information available about the benefits AI offers to balance the views of those who believe that AI will destroy society and take away their jobs. For now, AI has only been providing great benefits, and its wider adoption in the medium term can only benefit and optimize many areas of human activity.
Python vs. Java: Uses, Performance, Learning
In the world of computer science, there are many programming languages, and no single language is superior to another. In other words, each language is best suited to solving certain problems, and in fact there is often no single best language to choose for a given programming project. For this reason, it is important for students who wish to develop software or to solve interesting problems through code to have strong computer science fundamentals that will apply across any programming language.

Programming languages tend to share certain characteristics in how they function, for example in the way they deal with memory usage or how heavily they use objects. Students will start seeing these patterns as they are exposed to more languages. This article will focus primarily on Python versus Java, which are two of the most widely used programming languages in the world. While it is hard to measure exactly the rate at which each programming language is growing, these are two of the most popular programming languages used in industry today.

One major difference between Python and Java is that Python is dynamically typed, while Java is statically typed. Loosely, this means that Java is much more strict about how variables are defined and used in code. As a result, Java tends to be more verbose in its syntax, which is one of the reasons we recommend learning Python before Java for beginners. For example, here is how you would create a variable named numbers that holds the numbers 0 through 9 in Python:

    numbers = []
    for i in range(10):
        numbers.append(i)

Here's how you would do the same thing in Java:

    import java.util.ArrayList;

    ArrayList<Integer> numbers = new ArrayList<>();
    for (int i = 0; i < 10; i++) {
        numbers.add(i);
    }

Another major difference is that Java generally runs programs more quickly than Python, as it is a compiled language. This means that before a program is actually run, the compiler translates the Java code into machine-level code. By contrast, Python is an interpreted language, meaning there is no compile step.

Usage and Practicality

Historically, Java has been the more popular language, in part due to its long legacy. However, Python is rapidly gaining ground. According to GitHub's State of the Octoverse report, it has recently surpassed Java as the most widely used programming language. As per the 2018 Stack Overflow Developer Survey, Python is now the fastest-growing computer programming language. Both Python and Java have large communities of developers to answer questions on websites like Stack Overflow. As you can see from Stack Overflow trends, Python surpassed Java in terms of the percentage of questions asked about it on Stack Overflow in 2017. At the time of writing, about 13% of the questions on Stack Overflow are tagged with Python, while about 8% are tagged with Java.

Web Development

Python and Java can both be used for backend web development. Typically developers will use the Django and Flask frameworks for Python and Spring for Java. Python is known for its code readability, meaning Python code is clean, readable, and concise. Python also has a large, comprehensive set of modules, packages, and libraries that exist beyond its standard library, developed by the community of Python enthusiasts. Java has a similar ecosystem, although perhaps to a lesser extent.

Mobile App Development

In terms of mobile app development, Java dominates the field, as it is the primary language used for building Android apps and games.
Thanks to the aforementioned tailored libraries, developers have the option to write Android apps by leveraging robust frameworks and development tools built specifically for the operating system. Currently, Python is not commonly used for mobile development, although there are tools like Kivy and BeeWare that allow you to write code once and deploy apps across Windows, OS X, iOS, and Android.

Machine Learning and Big Data

Conversely, in the world of machine learning and data science, Python is the most popular language. Python is often used for big data, scientific computing, and artificial intelligence (A.I.) projects. The vast majority of data scientists and machine learning programmers opt for Python over Java when working on projects that involve sentiment analysis. At the same time, it is important to note that many machine learning programmers may choose to use Java when they work on projects related to network security, cyber attack prevention, and fraud detection.

Where to Start

When it comes to learning the foundations of programming, many studies have concluded that it is easier to learn Python than Java, due to Python's simple and intuitive syntax, as seen in the earlier example. Java programs often have more boilerplate code (sections of code that have to be included in many places with little or no alteration) than Python. That being said, there are some notable advantages to Java, in particular its speed as a compiled language. Learning both Python and Java will give students exposure to two languages that lay their foundations on similar computer science concepts, yet differ in instructive ways.

Overall, it is clear that both Python and Java are powerful programming languages in practice, and it would be advisable for any aspiring software developer to learn both languages proficiently. Programmers should compare Python and Java based on the specific needs of each software development project, as opposed to simply learning the one language that they prefer. In short, neither language is superior to the other, and programmers should aim to have both in their coding experience.

                             Python    Java
    Runtime Performance                Winner
    Ease of Learning         Winner
    Practical Agility        Tie       Tie
    Mobile App Development             Winner
    Big Data                 Winner

This article originally appeared on junilearning.com
5 Key Challenges In Today’s Era of Big Data
Digital transformation will create trillions of dollars of value. While estimates vary, the World Economic Forum in 2016 estimated an increase of $100 trillion in global business and social value by 2030. Due to AI, PwC has estimated an increase of $15.7 trillion and McKinsey an increase of $13 trillion in annual global GDP by 2030. We are currently in the middle of an AI renaissance, driven by big data and breakthroughs in machine learning and deep learning. These breakthroughs offer opportunities and challenges to companies, depending on the speed at which they adapt to these changes.

Modern enterprises face 5 key challenges in today's era of big data:

1. Handling a multiplicity of enterprise source systems. The average Fortune 500 enterprise has a few hundred enterprise IT systems, all with different data formats, mismatched references across data sources, and duplication.

2. Incorporating and contextualizing high-frequency data. The challenge gets significantly harder with the increase in sensor deployments, which results in flows of real-time data. For example, readings of the gas exhaust temperature for an offshore low-pressure compressor are of only limited value in and of themselves. But combined with ambient temperature, wind speed, compressor pump speed, history of previous maintenance actions, and maintenance logs, this real-time data can create a valuable alarm system for offshore rig operators.

3. Working with data lakes. Storing large amounts of disparate data in one infrastructure location does not reduce data complexity any more than letting the data sit in siloed enterprise systems.

4. Ensuring data consistency, referential integrity, and continuous downstream use. A fourth big data challenge is representing all existing data as a unified image, keeping this image updated in real time, and updating all downstream analytics that use these data. Data arrival rates vary by system, data formats from source systems change, and data arrive out of order due to networking delays.

5. Enabling new tools and skills for new needs. Enterprise IT and analytics teams need to provide tools that enable employees with different levels of data science proficiency to work with large data sets and perform predictive analytics using a unified data image.

Let's look at what's involved in developing and deploying AI applications at scale.

Data assembly and preparation
The first step is to identify the required and relevant data sets and assemble them. There are often issues with data duplication, gaps in data, unavailable data, and data out of sequence.

Feature engineering
This involves going through the data and crafting individual signals that the data scientists and domain experts think will be relevant to the problem being solved. In the case of AI-based predictive maintenance, signals could include the count of specific fault alarms over the trailing 7, 14 and 21 days; the sum of those alarms over the same trailing periods; and the maximum value of certain sensor signals over those trailing periods (see the sketch below).

Labelling the outcomes
This step involves labelling the outcomes the model tries to predict. For example, in AI-based predictive maintenance applications, source data sets rarely identify actual failure labels, and practitioners have to infer failure points based on a combination of factors such as fault codes and technician work orders.
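Here is a minimal pandas sketch of the trailing-window feature engineering described above; the alarm and sensor data are invented for illustration.

    import pandas as pd

    # Hypothetical daily records for one machine: fault-alarm counts and a
    # gas exhaust temperature reading.
    df = pd.DataFrame({
        "alarm_count": [0, 2, 1, 0, 5, 3, 0, 1, 4, 2],
        "exhaust_temp": [410, 415, 420, 418, 460, 455, 430, 425, 470, 465],
    })

    # Trailing 7-day signals of the kind described above: sum of alarms,
    # number of days with at least one alarm, and maximum sensor value.
    df["alarms_sum_7d"] = df["alarm_count"].rolling(7, min_periods=1).sum()
    df["alarm_days_7d"] = (df["alarm_count"] > 0).astype(int).rolling(7, min_periods=1).sum()
    df["max_temp_7d"] = df["exhaust_temp"].rolling(7, min_periods=1).max()

    # The same pattern extends directly to 14- and 21-day windows.
    print(df.tail())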
Setting up the training data
For classification tasks, data scientists need to ensure that labels are appropriately balanced between positive and negative examples to give the classifier algorithm enough balanced data. Data scientists also need to ensure the classifier is not biased by artificial patterns in the data.

Choosing and training the algorithm
Numerous algorithm libraries are available to data scientists today, created by companies, universities, research organizations, government agencies and individual contributors.

Deploying the algorithm into production
Machine learning algorithms, once deployed, need to receive new data, generate outputs, and have some actions or decisions made based on those outputs. This may mean embedding the algorithm within an enterprise application used by humans to make decisions, for example a predictive maintenance application that identifies and prioritizes equipment requiring maintenance in order to guide maintenance crews. This is where the real value is created: by reducing equipment downtime and servicing costs through more accurate failure prediction that enables proactive maintenance before the equipment actually fails. In order for the machine learning algorithms to operate in production, the underlying compute infrastructure needs to be set up and managed.

Closed-loop continuous improvement
Algorithms typically require frequent retraining by data science teams. As market conditions change, business objectives and processes evolve, and new data sources are identified, organizations need to rapidly develop, retrain, and deploy new models.

The problems that have to be addressed to solve AI computing challenges are therefore nontrivial. Massively parallel elastic computing and storage capacity are prerequisites. In addition to the cloud, a multiplicity of data services is necessary to develop, provision, and operate applications of this nature. However, the price of missing a transformational strategic shift is steep. The corporate graveyard is littered with once-great companies that failed to change.

This article originally appeared on Makeen Technologies.
The Future of the Stock Market: Machine Learning-based Predictions
Since the arrival of automated investment and artificial intelligence in the stock markets, the Holy Grail of stock market investment has been to develop and refine an algorithm that can predict the future behavior of the market and of listed companies. Needless to say, knowing how to predict future stock trends translates into money, and it is also necessary to act on those predictions ahead of other investors, before the scenario is priced in by everyone in the market.

And now there is a new generation of Machine Learning (ML) systems whose success rate cannot be the result of mere chance: ML already predicts correctly a very high percentage of the time, with a success rate much higher than that of the vast majority of human stock advisors, 79% and even 90% in certain cases.

In the stock market, being first has always meant earning more money or losing less. Being the first to trade literally translates into money. It means taking it for granted that "information is power" and operating with foresight where others are disoriented and trading blindly. It goes without saying that it is usually the latter who end up taking the losses, because there is nothing worse in the markets than having no strategy beyond a few misleading hunches. Here, automated investment may be contributing a great deal to the markets, since it establishes clear and concise investment rules and avoids breaking them under the influence of human passions, such as euphoria or panic, that are very dangerous for your pockets.

But even with an AI that will obviously be marketed, the more massively the better, it is highly likely that those predictions of the future will be available to many investors, human or synthetic. And under this scenario, when many in the market hold a prediction with a high probability of being fulfilled, again being the first to trade will result in money, with the addition that now speed will be absolutely decisive for taking profits or losses on each operation.

Only technical analysis is used as a tool for these stock predictions, because it is considered easy for the algorithm to learn and for the human to interpret, giving predictions based on only one attribute: the historic prices of the stock. The current algorithm produces predictions for a single stock that is given as input (a minimal sketch of this idea appears below). Here are a few companies that use Machine Learning in technical analysis for stock prediction:

● Trading Technologies.
● GreenKey Technologies.
● Kavout.
● Auquan.
● Epoque.
● Sigmoidal.
● Equbot.
● AITrading.

Recently, an Israel-based stock forecast company named 'I Know First', using predictive Artificial Intelligence, demonstrated an accuracy of up to 97% in its predictions for the S&P 500 and Nasdaq indices, as well as their respective ETFs. So there is a lot that can be achieved and explored with the use of Machine Learning in stock prediction. AI is just a new twist on what has already been the virtualization of markets since the arrival of automated investment. As we said before, this profitable symbiosis between telecommunications and operations is not exactly new, as it has been that way since the dawn of automated investment some five years ago.
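For illustration only, here is a minimal sketch of the single-stock, prices-only approach described above: a lagged-features regression in scikit-learn. The prices are invented, and nothing here implies the accuracy figures quoted in this article.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Invented daily closing prices for one stock: the single attribute
    # (historic prices) the article describes.
    prices = np.array([100.0, 101.2, 100.8, 102.5, 103.0, 102.1,
                       104.0, 105.2, 104.8, 106.0, 107.1, 106.5])

    # Lagged features: the previous 3 closes are used to predict the next one.
    lags = 3
    X = np.array([prices[i:i + lags] for i in range(len(prices) - lags)])
    y = prices[lags:]

    model = LinearRegression().fit(X, y)

    # Predict the next close from the 3 most recent closes.
    print(model.predict(prices[-lags:].reshape(1, -1)))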
It is true that automated investment began by taking small (even imperceptible) benefits from tiny market fluctuations, in which agility of operation was fundamental, since these spikes in share prices can last just a fraction of a second. If one was able to operate on the same order of temporal magnitude, there was a possible benefit to be taken out of the market. But what is really news now is that, as we will analyze, this ultra-rapid factor in trading acquires double relevance under the scenario of a successful AI algorithm.

We must emphasize that these algorithms may be contributing to better price formation and to making the market work better, but the negative side is delegating human decision-making capacity to algorithms, without knowing how they will react to black swan events. Indeed, we said before that human error lies in being carried away by euphoria or panic, but we said this assuming regular conditions. In scenarios of volatility not for the faint of heart, and in black swan events, although many investors can continue to fall prey to those unprofitable passions, there is a moment when a mature, professional, and experienced manager is literally worth his weight in gold: the moment when he should take the helm.

It should be considered a requirement of the software architecture of investment programs that something like an airplane's autopilot be implemented: in regular conditions, the aircraft is perfectly piloted by the automated system, but when things look rough, the pilot can regain control of the ship and get the passengers out of vital trouble. Automated investment must take these same precautions because today the human mind is still infinitely more intuitive and analytical than an algorithm that, after all, is based on historical data, which may sometimes not serve as a seed for iterating the learning of an artificial intelligence. This can be especially so when we must weigh factors of subjective perception, which can also have a strong influence on the market, and whose subjective complexity adds a great degree of difficulty for an objective robot, not to mention the global cost of continually retraining, with recurring iterations, a multitude of investment robots across the planet.

Bid farewell to the simple real-time investment that was new in the 90s, and welcome to the era of real-time market prediction. We will operate based on an ephemeral, ever-changing market scenario, which will cease to exist as soon as we act on it together with a certain critical mass of investors.
The Future of HR from 2020: Machine Learning & Deep Learning
The future of HR lies in Deep Learning, which is machine learning on steroids. It uses a technique that gives machines an improved ability to find, and amplify, even the smallest patterns. This technique is called a deep neural network: deep because it has many layers of simple computational nodes that work together to search through data and deliver a final result in the form of a prediction.

Neural networks were vaguely inspired by the inner workings of the human brain. The nodes are like neurons and the network is like the brain itself. But Geoffrey Hinton published his breakthrough at a time when neural networks had gone out of style. No one really knew how to train them, so they were not giving good results. The technique took almost 30 years to recover. But suddenly, it emerged from the abyss.

One last thing we should know in this introduction: machine learning (and deep learning) comes in three packages: supervised, unsupervised and reinforcement. In supervised learning, the most frequent, the data is labeled to tell the machine exactly what patterns to look for. Think of it as a tracking dog that will chase targets once it knows the scent you're after. That's what you are doing when you press play on a Netflix program: you are telling the algorithm to find similar programs.

In unsupervised learning, the data has no labels. The machine simply searches for any patterns it can find. This is like letting a person examine tons of different objects and sort them into groups with similar traits. Unsupervised techniques are not as popular because they have less obvious applications but, curiously, they have gained strength in cybersecurity.

Finally, we have reinforcement learning, the last frontier of machine learning. A reinforcement algorithm learns by trial and error to achieve a clear objective. It tries many different things and is rewarded or penalized depending on whether its behaviors help or prevent it from reaching its goal. This is like rewarding a child with praise and affection for behaving well. Reinforcement learning is the basis of Google's AlphaGo, the program that surpasses the best human players at the complex game of Go.

Applied to Human Resources, although the growth potential is wide, the current use of Machine Learning is limited, and it presents a dilemma that must be resolved in the future, related to the ability of machines to discover talent in human beings beyond their hard and verifiable competencies, such as level of education.

Software intelligence is transforming human resources. At the moment its main focus is on recruitment processes, which in most cases are very expensive and inefficient, the goal being to find the best candidates among thousands, although there are multiple other examples of applications. A first example would be the development of technology that allows people to create job descriptions that are gender-neutral, to attract the best possible candidates, whether male or female. This would broaden the pool of job seekers and produce a more balanced population of employees. A second example is the training recommendations that employees could receive.
On many occasions these employees have many training options, but often they cannot find what is most relevant to them. These algorithms therefore present the internal and external courses that best suit the employee's development objectives, based on many variables, including the skills the employee intends to develop and the courses taken by other employees with similar professional objectives.

A third example is Sentiment Analysis, a form of NLP (Natural Language Processing) that analyzes the social conversations generated on the Internet to identify opinions and extract the emotions (positive, negative or neutral) they implicitly carry. Sentiment analysis determines:

- Who is the subject of the opinion.
- What is being talked about.
- Whether the opinion is positive, negative or neutral.

This tool can be applied to words and expressions, as well as phrases, paragraphs and documents found in social networks, blogs, forums or review pages. Sentiment analysis uncovers the hidden connotation behind subjective information. There are different systems of sentiment analysis:

- Sentiment analysis by polarity: Opinions are classified as very positive, positive, neutral, negative or very negative. This type of analysis is very simple with reviews that use scoring mechanisms from 1 to 5, where 1 is very negative and 5 is very positive.
- Sentiment analysis by type of emotion: The analysis detects specific emotions and feelings: happiness, sadness, anger, frustration, etc. For this, there is usually a list of words and the feelings with which they are usually associated.
- Sentiment analysis by intention: This system interprets comments according to the intention behind them: Is it a complaint? A question? A request?

A fourth example is employee attrition, through which we can predict which employees will remain in the company and which will not, based on several parameters (a minimal sketch follows at the end of this article).

These four cases are clear examples in which Machine Learning elevates the role of human resources from tactical processes to strategic processes. Smart software is enabling the mechanics of workforce management, such as creating job descriptions, recommending courses or predicting which employees are more likely to leave the company, giving the possibility to react in time and apply corrective policies for those deficiencies. From the business point of view, machine learning technology is an opportunity to drive greater efficiency and better decision making. This will help everyone make better decisions and, equally important, will give Human Resources a strategic and valuable voice at the executive level.

Prof. Villamarin Rodriguez
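To make the attrition example concrete, here is a minimal, purely illustrative sketch; the HR features and records are invented, and this is not any vendor's actual model.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Invented HR records: tenure (years), salary percentile within the
    # company, training hours last year, and whether the employee left.
    df = pd.DataFrame({
        "tenure_years":   [1, 7, 2, 10, 3, 4, 8, 1, 6, 2],
        "salary_pct":     [30, 80, 45, 90, 35, 60, 75, 25, 70, 40],
        "training_hours": [5, 40, 10, 50, 8, 25, 35, 4, 30, 12],
        "left_company":   [1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
    })

    X = df.drop(columns="left_company")
    y = df["left_company"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # class_weight="balanced" guards against skewed stay/leave label counts.
    model = RandomForestClassifier(class_weight="balanced", random_state=0)
    model.fit(X_train, y_train)

    # Probability that each held-out employee leaves, for HR to act on in time.
    print(model.predict_proba(X_test)[:, 1])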
BABYLON: THE GROWING AI TREND IN THE HEALTHCARE INDUSTRY
Artificial intelligence is not new, yet there have been rapid advances in the field in recent years. This has in part been enabled by developments in computing power and the huge volumes of digital data that are now generated. A wide range of applications of AI is now being explored with considerable public and private investment and interest. The UK Government announced its ambition to make the UK a world leader in AI and data technologies in its 2017 Industrial Strategy. In April 2018, a £1bn AI sector deal between UK Government and industry was announced, including £300 million towards AI research.

AI is celebrated as having the capacity to help address major health challenges, for example meeting the care needs of an ageing population. Major technology companies, including Google, Microsoft, and IBM, are investing in the development of AI for healthcare and research. The number of AI start-ups has also been steadily increasing. There are several UK-based companies, some of which have been set up in collaboration with UK universities and hospitals. Partnerships have been formed between NHS providers and AI developers such as IBM, DeepMind, Babylon Health, and Ultromics.

Healthcare Organization – Artificial intelligence can potentially be used in planning and resource allocation in health and social care services. For example, the IBM Watson Care Manager system is being piloted by Harrow Council with the aim of improving cost efficiency. It matches individuals with a care provider that meets their needs, within their allocated care budget. It also structures individual care plans and claims to offer insights for more effective use of care management resources.

AI is also being used with the aim of improving patient experience. Alder Hey Children's Hospital in Liverpool is working with IBM Watson to create a 'cognitive hospital', which will include an app to facilitate interactions with patients. The app aims to identify patient anxieties before a visit, provide information on demand, and give clinicians information to help them deliver appropriate treatments.

Medical Research – Artificial intelligence can be used to analyze and identify patterns in large and complex datasets faster and more precisely than has previously been possible. It can also be used to search the scientific literature for relevant studies, and to combine different kinds of data; for example, to aid drug discovery. The Institute of Cancer Research's canSAR database combines genetic and clinical data from patients with information from scientific research, and uses AI to make predictions about new targets for cancer drugs. Researchers have developed an AI 'robot scientist' called Eve, which is designed to make the process of drug discovery faster and more economical. (K. Williams, 2015) AI systems used in healthcare could also be valuable for medical research by matching suitable patients to clinical studies.

Clinical Care – Artificial intelligence can potentially aid the diagnosis of disease and is currently being trialled for this purpose in some UK hospitals.
Using AI to analyze clinical data, research publications, and professional guidelines could also inform decisions about treatment.

PATIENT AND CONSUMER IMPACT APPLICATIONS – Several apps that use AI to offer personalized health assessments and home care advice are already on the market. The app Ada Health Companion uses AI to operate a chat bot, which combines information about symptoms from the user with other information to offer possible diagnoses. GP at Hand, a similar app developed by Babylon Health, is currently being trialled by a group of NHS GP practices in London.

Information tools and chat bots driven by AI are being used to help with the management of chronic medical conditions. For example, the Arthritis Virtual Assistant developed by IBM for Arthritis Research UK is learning through interactions with patients to provide personalized information and advice concerning medication, diet, and exercise. (Release, 2017) Government-funded and commercial initiatives are exploring ways in which AI could be used to power robotic systems and apps that support people living at home with conditions such as early-stage dementia. AI applications that monitor and support patient adherence to prescribed medication and treatment have been trialled with promising results, for example in patients with tuberculosis. (L. Shafner, 2017) Other tools, such as Sentrian, use AI to analyze information collected by sensors worn by patients at home. The aim is to detect signs of deterioration, enabling early intervention and preventing hospital admissions.

PUBLIC HEALTH – Artificial intelligence can potentially be used to support early detection of infectious disease outbreaks and sources of epidemics, such as water contamination. (B. Jacobsmeyer, 2012) AI has also been used to predict adverse drug reactions, which are estimated to cause up to 6.5 percent of hospital admissions in the UK.

Babylon, a UK start-up, plans to "put an accessible and affordable health service in the hands of every person on earth" by putting artificial intelligence (AI) tools to work. At present, the company has operations in the UK and Rwanda, and plans to expand to the Middle East, the United States, and China. The company's strategy is to combine the power of AI with the medical expertise of humans to deliver unparalleled access to healthcare.

How does Babylon's AI work? A dedicated team of research scientists, engineers, doctors and epidemiologists are working together to develop and improve Babylon's AI capabilities. Much of this work is cutting-edge AI research, driven by access to large volumes of data from the medical community, continuous learning from Babylon's own users, and feedback from Babylon's own doctors.

The knowledge graph and user graph: Babylon's Knowledge Graph is one of the largest structured medical knowledge bases in the world. It captures human knowledge of modern medicine and is encoded for machines. Babylon uses it as the basis on which its intelligent components talk to one another. The Knowledge Graph keeps track of the meaning behind medical terminology.
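As a toy illustration of what "encoded for machines" can mean, here is a minimal knowledge-graph sketch using networkx; the medical facts and relation names are invented for illustration and do not reflect Babylon's actual system.

    import networkx as nx

    # A toy medical knowledge graph: nodes are concepts and each edge carries
    # the relation that links them. All facts and relation names are invented.
    kg = nx.DiGraph()
    kg.add_edge("influenza", "fever", relation="causes")
    kg.add_edge("influenza", "cough", relation="causes")
    kg.add_edge("fever", "paracetamol", relation="treated_by")
    kg.add_edge("influenza", "respiratory infection", relation="is_a")

    # Once knowledge is encoded this way, machines can traverse it, e.g. to
    # list the symptoms a given condition causes.
    symptoms = [target for _, target, data in kg.out_edges("influenza", data=True)
                if data["relation"] == "causes"]
    print(symptoms)  # ['fever', 'cough']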