Artificial intelligence: Building simple AI models using machine learning algorithms for business analysis.



🔍 Artificial intelligence: Building simple AI models using machine learning algorithms for business analysis

Artificial intelligence (AI) has become a game-changer in various industries, including business analysis. By leveraging machine learning algorithms, businesses can unlock valuable insights from their data and make data-driven decisions. In this step, we will explore the process of building simple AI models for business analysis.

🔹 Understanding Artificial Intelligence:

Artificial intelligence refers to the development of computer systems that can perform tasks that typically require human intelligence. Machine learning is a subset of AI that enables computers to learn from data and improve their performance over time without being explicitly programmed.

🔹 Importance of AI in Business Analysis:

AI has revolutionized business analysis by enabling organizations to extract meaningful patterns and trends from large volumes of data. By utilizing machine learning algorithms, businesses can:

✅ Identify patterns: AI algorithms can identify hidden patterns within data that may not be apparent to humans. These patterns can provide insights into customer behavior, market trends, and operational efficiencies.

✅ Make predictions: AI models can analyze historical data to make predictions about future outcomes. For example, predictive models can forecast customer churn, sales revenue, or demand for products.

✅ Automate processes: AI can automate repetitive tasks, saving time and resources. For instance, AI-powered chatbots can handle customer queries, freeing up human resources for more complex tasks.

🔹 Building Simple AI Models:

Building AI models for business analysis involves several key steps:

1️⃣ Data Collection: Gather relevant data from various sources, such as customer interactions, sales records, or social media data. This data will serve as the input for the AI model.

2️⃣ Data Preprocessing: Clean and preprocess the data to remove noise, handle missing values, and transform it into a suitable format for analysis. This step ensures the quality and consistency of the data.

3️⃣ Feature Engineering: Extract meaningful features from the data that can contribute to the accuracy of the AI model. These features can include customer demographics, purchase history, or sentiment scores.

Example:

# Example of feature engineering in Python
import pandas as pd
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# The VADER lexicon must be downloaded once before first use
nltk.download('vader_lexicon')

# Load Twitter data (assumes a CSV file with a 'text' column)
tweets = pd.read_csv('twitter_data.csv')

# Perform sentiment analysis on each tweet
sia = SentimentIntensityAnalyzer()
tweets['sentiment_score'] = tweets['text'].apply(lambda x: sia.polarity_scores(x)['compound'])

# Extract simple count-based features
tweets['word_count'] = tweets['text'].apply(lambda x: len(x.split()))
tweets['hashtag_count'] = tweets['text'].apply(lambda x: x.count('#'))


4️⃣ Model Selection: Choose an appropriate machine learning algorithm based on the nature of the problem and the available data. Some commonly used algorithms for business analysis include linear regression, decision trees, and support vector machines.

5️⃣ Model Training: Split the data into training and testing sets and train the AI model using the training data. The model learns from the patterns in the training data to make accurate predictions.

6️⃣ Model Evaluation: Assess the performance of the AI model using evaluation metrics such as accuracy, precision, recall, or mean squared error. This step helps identify any shortcomings of the model and fine-tune it if necessary.

7️⃣ Model Deployment: Once the AI model demonstrates satisfactory performance, it can be deployed to make predictions on new, unseen data. This can be done through web applications, APIs, or integration into existing business systems.

Example:

# Example of model training and evaluation in Python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Split data into training and testing sets
# ('features' and 'target' are assumed to come from the preprocessing steps above)
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=42)

# Train a logistic regression model
model = LogisticRegression()
model.fit(X_train, y_train)

# Make predictions on the test data
predictions = model.predict(X_test)

# Evaluate the model
accuracy = accuracy_score(y_test, predictions)
print(f"Model Accuracy: {accuracy}")


🔹 Real-World Examples:

1️⃣ Fraud Detection: Businesses can use AI models to detect fraudulent transactions by analyzing patterns and anomalies in customer behavior.

2️⃣ Customer Segmentation: AI algorithms can segment customers based on their purchasing behavior, demographics, or preferences. This enables businesses to tailor marketing strategies and provide personalized experiences.

3️⃣ Demand Forecasting: AI models can analyze historical sales data to predict future demand for products or services. This allows businesses to optimize inventory management and production planning.

In conclusion, building simple AI models using machine learning algorithms can empower businesses to gain valuable insights, automate processes, and make data-driven decisions. By understanding the concepts and following the steps outlined above, businesses can leverage the power of AI to enhance their business analysis capabilities.


Understanding Artificial Intelligence and Machine Learning

  • Definition of Artificial Intelligence and its applications in business analysis

  • Introduction to Machine Learning and its role in building AI models

  • Types of Machine Learning algorithms used for business analysis

Let's Dive Deeper into Artificial Intelligence 👨‍💻🤖

We've all heard about Artificial Intelligence (AI), a revolutionary technology that's making waves in every corner of the business world. But what does it actually mean? AI is the simulation of human intelligence processes by machines, particularly computer systems. It involves learning from data, reasoning to reach approximate or definite conclusions, and self-correction.

Take, for instance, the popular voice-enabled AI assistant, Siri. Siri learns from user interactions and tailors responses accordingly, showing the concept of AI in action. Similarly, AI in business analysis can help interpret extensive data, identify patterns, and provide insights, helping businesses make data-driven decisions.

# Example of AI in action: Chatbot (illustrative pseudocode)
class ChatBot:
    def __init__(self, name):
        self.name = name

    def respond_to(self, query):
        # Placeholder for an AI algorithm that understands the query
        # and provides an answer
        return "This is an AI response to the query"


Machine Learning: The Backbone of AI Models 🧠🤔

Machine Learning (ML) is an application of AI. It provides systems with the ability to automatically learn and improve from experience without being explicitly programmed. In other words, ML models can learn from the data they are fed and make predictions or decisions based on this learning.

Think about Netflix's recommendation algorithm, a prime example of ML. It uses a user's past viewing history and ratings to suggest new movies or series, thereby enhancing the user experience.

In the context of business, ML algorithms can be used to identify customer behavior patterns, forecast sales, detect fraud, and more.

# Example of ML in action: Recommendation System (illustrative pseudocode)
class RecommendationSystem:
    def __init__(self, user_data):
        self.user_data = user_data

    def recommend(self):
        # Placeholder for an ML algorithm that analyzes user data
        # and makes recommendations
        return "These are the recommended items based on the user's history"


Various Types of Machine Learning Algorithms Used in Business Analysis 🧮📉

There are multiple types of ML algorithms, each with its own strengths and use cases. Here, we focus on three main types used in business analysis:

1. Supervised Learning: In this type of ML, the model is trained on a labeled dataset. For example, a spam detection model is trained with emails labeled as 'spam' or 'not spam'. It is then able to classify new emails based on this training (see the example below).

2. Unsupervised Learning: Here, the model is provided with an unlabeled dataset and uncovers patterns on its own. A common application is market segmentation, where the model identifies different customer groups based on buying behavior (a clustering sketch follows the spam-detection example).

3. Reinforcement Learning: In this type, an agent learns to behave in an environment by performing actions and observing the results/rewards. For example, optimizing the delivery route in real time for a delivery service.

# Example of Supervised Learning: Spam Detection (illustrative pseudocode)
class SpamDetector:
    def __init__(self, labeled_emails):
        self.labeled_emails = labeled_emails

    def detect_spam(self, new_email):
        # Placeholder for an ML algorithm that classifies the new email
        # based on the labeled emails
        return "This email is classified as either 'spam' or 'not spam'"


By understanding AI and ML, you can build simple AI models for business analysis by selecting the right ML algorithm based on the available data and the problem at hand. Remember, the goal is to turn data into information and information into insight. 🎯


Data Preprocessing for AI Model Building

  • Data cleaning and handling missing values

  • Feature selection and feature engineering techniques

  • Splitting the data into training and testing sets

How Dirty Data Can Throw a Wrench in Your AI Model

We often hear the phrase "Garbage in, garbage out" when it comes to machine learning models. This isn't just a catchy phrase but a stark reality in the world of data science. Imagine you are working on a critical project at an investment firm to predict stock market trends. Your model is trained on high-quality data and is performing well. Suddenly, you notice that the predictions go haywire. On investigating, you find that the new data you are feeding into the model has missing values. This is a classic example of how missing or dirty data can play havoc with your AI models.

💡 Data Cleaning and Handling Missing Values

Data cleaning refers to the process of detecting, correcting, or removing corrupt or inaccurate records from a dataset. It involves various techniques such as removing duplicates, correcting errors, handling missing values, etc. The main goal here is to improve the quality and reliability of your data.

For instance, consider an AI model for predicting customer churn at a telecom company. The dataset might have missing values for some features like 'tenure', 'monthly charges', etc. These missing values could be due to various reasons like data entry errors, faulty data extraction process, etc.

In Python, libraries such as pandas and NumPy offer useful functions for handling missing data. For instance, you may choose to fill missing values with the mean, median, or mode of the relevant column. Alternatively, you may drop the rows or columns that contain missing values.

# Importing the pandas and NumPy libraries
import numpy as np
import pandas as pd

# Creating a data frame with some missing values
df = pd.DataFrame({'A': [1, 2, np.nan],
                   'B': [5, np.nan, np.nan],
                   'C': [1, 2, 3]})

# Filling missing values with the mean of the column
df['A'] = df['A'].fillna(df['A'].mean())

# Alternatively, drop rows that contain missing values
# df = df.dropna()


🔎 Feature Selection and Feature Engineering Techniques

Feature selection refers to the process of selecting a subset of relevant features for use in model construction. Feature engineering, on the other hand, involves creating new features from existing ones to improve model performance.

Let's take the telecom customer churn prediction model as an example again. The raw data may contain hundreds of features about each customer, but not all of them are relevant for predicting churn. This is where feature selection comes in. Techniques such as the correlation matrix, the chi-square test, and recursive feature elimination (RFE) can be used for feature selection, as sketched below.
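
As a hedged illustration of one of these techniques, here is a minimal recursive feature elimination sketch using scikit-learn; the feature matrix X and the churn labels y are assumed to already exist.

# Illustrative feature selection with recursive feature elimination (RFE)
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Keep the 5 features that contribute most to predicting churn
selector = RFE(estimator=LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)
print(selector.support_)  # Boolean mask marking the selected features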

Next comes feature engineering. Consider the 'tenure' feature, which is the number of months the customer has stayed with the company. We can engineer a new feature 'tenure_bin' that categorizes tenure into 'short_term', 'medium_term', 'long_term', and so on. This can sometimes improve the model's performance.

# Feature engineering - creating a new feature 'tenure_bin'
import numpy as np

conditions = [
    (df['tenure'] <= 12),
    (df['tenure'] > 12) & (df['tenure'] <= 24),
    (df['tenure'] > 24) & (df['tenure'] <= 60),
    (df['tenure'] > 60)]
choices = ['short_term', 'medium_term', 'long_term', 'very_long_term']
df['tenure_bin'] = np.select(conditions, choices, default='unknown')


🚆 Splitting the Data into Training and Testing Sets

Finally, the dataset must be divided into two sets - a training set and a testing set. The training set is used to train the machine learning model, while the testing set is used to evaluate the model's performance.

A common practice is to split the data into a 70% training set and a 30% testing set. It is crucial to ensure that the training and testing sets are similar in terms of features and distribution.

# Importing train_test_split from sklearn
from sklearn.model_selection import train_test_split

# Splitting the data into 70% training and 30% testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)


In conclusion, data preprocessing is a critical step in building AI models. It is this step that sets the base for the performance of your model. So, never underestimate the power of good quality, clean data!


Building Regression Models for Business Analysis

  • Introduction to regression analysis and its applications in business analysis

  • Linear regression and its assumptions

  • Building and evaluating a simple linear regression model for business analysis

When Businesses Met Regression Analysis

Let's start with an interesting fact: A study by McKinsey suggests that organizations that leverage big data and analytics have seen productivity rates and profitability that are 5% to 6% higher than those of their peers. One such powerful tool that helps businesses to leverage big data is Regression Analysis, an essential component of predictive analytics.

Regression Analysis📈 is a powerful statistical tool that allows us to examine the relationship between two or more variables of interest. For businesses, it can be used to predict sales in the future, evaluate trends, or even assess the impact of a social media marketing campaign.

Unpacking Linear Regression and Its Assumptions

At the heart of regression analysis is Linear Regression📏, a basic predictive analytics technique. It assumes a linear relationship between input variables (or independent variables) and a single output variable (or dependent variable).

Let's illustrate this with an example: suppose your business wants to predict future sales based on past advertising spend. In this case, advertising spend is your input variable, and sales is your output variable.

There are a few key assumptions that this Linear Regression model makes:

  • Linearity: The relationship between the input and output variables is linear.

  • Independence: Observations are independent of each other.

  • Homoscedasticity: The variance around the regression line is the same for all values of the predictor variable.

  • Normality: The errors of the prediction will follow a normal distribution.

Violations of these assumptions can lead to inaccurate predictions and misinterpretation of the relationship between variables.

Crafting a Linear Regression Model for Business Analysis

Now, let's dive into how to build a linear regression model. Suppose we have a dataset that contains a company's advertising spend and its sales figures for the past year.

The first step is to split this dataset into a training set and a testing set. This enables us to evaluate the model performance later.

from sklearn.model_selection import train_test_split


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)


Next, we fit the data to our linear regression model using the training set.

from sklearn.linear_model import LinearRegression


model = LinearRegression()

model.fit(X_train, y_train)


Once we've established a model, we can use it to predict sales based on the testing set.

y_pred = model.predict(X_test)


Assessing the Model's Performance

Evaluating the model is a crucial step in Machine Learning✅. One common method of evaluating a regression model's performance is by computing the Root Mean Squared Error (RMSE), which measures the average magnitude of the error.

from sklearn.metrics import mean_squared_error


rmse = mean_squared_error(y_test, y_pred)**0.5


The lower the RMSE, the better our model's predictions match the actual values.
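
To check the assumptions listed earlier against the fitted model, a quick residual inspection helps. This is a minimal sketch that assumes matplotlib is installed and reuses y_test and y_pred from the example above.

# Illustrative residual diagnostics for the fitted regression model
import matplotlib.pyplot as plt

residuals = y_test - y_pred

# Residuals vs predicted values: a random scatter around zero supports
# the linearity and homoscedasticity assumptions
plt.scatter(y_pred, residuals)
plt.axhline(0, color='red')
plt.xlabel('Predicted sales')
plt.ylabel('Residual')
plt.show()

# A roughly bell-shaped histogram supports the normality assumption
plt.hist(residuals, bins=20)
plt.show()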

Regression models are an integral part of business analysis, unlocking the power of big data and predictive analytics for organizations. They help uncover hidden patterns and trends, enabling better decision-making, and ultimately, driving growth.

Building Classification Models for Business Analysis

  • Introduction to classification analysis and its applications in business analysis

  • Logistic regression and its assumptions

  • Building and evaluating a logistic regression model for business analysis

A Real-World Scenario: Predicting Customer Churn

Imagine this: You are a data scientist working for a large telecommunications company. Your task? Predict which customers are likely to churn (cancel their subscription) based on their behavior and data. This is a classic example of a classification problem, one of the most common applications of machine learning in business analysis.

Unraveling the Mysteries of Classification Analysis

Classification analysis 🔍 is a subfield of machine learning that focuses on predicting categorical class labels in a dataset. These labels represent different categories that a data point can belong to. In the customer churn scenario, the classes could be "will churn" or "won't churn".

Classification models allow us to predict outcomes and understand which variables contribute to these outcomes. This places it at the heart of business analysis, where it is used to inform strategic decisions, optimize marketing campaigns, detect fraud, and much more.

Logistic Regression: The Go-To Method for Classification

A predominant algorithm used in classification problems is logistic regression 🧮. Unlike linear regression, which predicts a continuous outcome, logistic regression predicts the probability that a data point belongs to a particular class.

Its underlying assumptions include:

  • The dependent variable is binary or ordinal.

  • There are no severe outliers or high-leverage points.

  • There is a linear relationship between any continuous predictors and the logit of the response variable.

  • There is no high multicollinearity among the predictors (a quick check follows below).

It's important to remember that if these assumptions are violated, the model's predictions and interpretations may be inaccurate.
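
A common way to screen for the multicollinearity assumption is the variance inflation factor (VIF). Here is a hedged sketch using statsmodels, assuming X is a DataFrame of numeric predictors.

# Illustrative multicollinearity check with variance inflation factors
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

vif = pd.DataFrame({
    'feature': X.columns,
    'VIF': [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
})
print(vif)  # VIF values above roughly 5-10 suggest problematic collinearity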

Constructing a Logistic Regression Model: A Step-by-Step Guide

Let's go back to our customer churn scenario. To build a logistic regression model for this, we would follow these steps:

  1. Define the Problem: Identify the dependent variable (customer churn) and the independent variables (customer behavior and data).

  2. Prepare the Data: Cleanse and preprocess the data to ensure it's suitable for logistic regression. This may involve handling missing values, dealing with outliers, or converting categorical variables into dummy variables.

  3. Train the Model: Split the dataset into a training set and a test set. Use the training set to train the logistic regression model.

  4. Test the Model: Use the test set to evaluate the performance of the model.

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Define the dependent and independent variables
# ('dataset' is assumed to be a prepared DataFrame with a 'churn' column)
X = dataset.drop('churn', axis=1)
y = dataset['churn']

# Split the dataset into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train the logistic regression model
logreg = LogisticRegression()
logreg.fit(X_train, y_train)

# Test the logistic regression model
y_pred = logreg.predict(X_test)
print('Accuracy of logistic regression classifier on test set: {:.2f}'.format(logreg.score(X_test, y_test)))


Evaluating our Logistic Regression Model

After building the model, we need to determine whether it's good enough for practical use. We assess a classification model's performance using metrics like accuracy, precision, recall, F1 score, and AUC-ROC.

In the code above, we use accuracy as our metric. It's the ratio of correct predictions to total predictions. However, in scenarios where the class distribution is highly imbalanced, other metrics may be more appropriate, as sketched below.
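
As a hedged sketch of those alternative metrics, continuing the churn example above (it assumes y_test, y_pred, X_test, and logreg from the previous block, and binary churn labels):

# Illustrative evaluation beyond accuracy
from sklearn.metrics import classification_report, roc_auc_score

# Precision, recall, and F1 score per class
print(classification_report(y_test, y_pred))

# AUC-ROC uses the predicted probabilities of the positive class
y_prob = logreg.predict_proba(X_test)[:, 1]
print('AUC-ROC: {:.2f}'.format(roc_auc_score(y_test, y_prob)))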

With a comprehensive understanding of classification analysis and logistic regression, you can now start leveraging these tools in your business. Whether it's predicting customer churn, optimizing marketing campaigns, or detecting fraud, classification models offer a way to make more data-driven decisions.


Building Decision Tree Models for Business Analysis

  • Introduction to decision tree analysis and its applications in business analysis

  • Building and evaluating a decision tree model for business analysis

  • Handling overfitting and improving model performance


Using Decision Trees for Business Analysis 🌳

Ever wondered how the Netflix algorithm seems to know exactly what movies or TV shows you'd love to watch next? Or how your credit card company can quickly detect suspicious activities on your account? The answer lies in machine learning models, specifically decision trees, which are widely used in various business applications for data analysis.

What is Decision Tree Analysis? 🌳🔍

Decision Tree Analysis is a predictive modeling tool that uses a tree-like model of decisions and their possible consequences. It's like playing a game of '20 questions' with your data - each question helps you zero in on the answer you're looking for. Businesses use decision tree models for various tasks including customer segmentation, fraud detection, risk management, and recommendation systems.

For instance, a bank could use a decision tree model to predict whether a potential customer would default on a loan or not. The model would look at various factors (nodes) such as income, employment status, credit score, etc., and make a series of decisions to reach a conclusion.

Building a Decision Tree Model 🛠️🌳

Let's dive into how you can build a decision tree model. Typically, this involves three main steps: data pre-processing, model training, and model evaluation.

Data Pre-Processing: This step involves preparing your data for the model. It may include dealing with missing values, encoding categorical variables, splitting the data into training and testing sets, and normalizing numeric attributes.

Model Training: During this step, the model 'learns' from the training data. Each node in the tree represents a feature (or attribute), and each branch represents a rule decision. The goal is to create a model that makes accurate predictions with minimal complexity.

Model Evaluation: Finally, you'd evaluate the model using the test data. Common metrics for evaluation include accuracy, precision, recall, and the F1 score.

Here's an example of how you can build a decision tree model in Python using the Scikit-learn library:

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create a decision tree classifier object
clf = DecisionTreeClassifier()

# Train the model using the training sets
clf.fit(X_train, y_train)

# Predict the response for the test dataset
y_pred = clf.predict(X_test)

# Print the classification report
print(classification_report(y_test, y_pred))


Handling Overfitting and Improving Model Performance 🛠️🚀

While decision trees are easy to understand and interpret, they can often become too complex, leading to overfitting. An overfitted model performs well on the training data but fails to generalize to unseen data. This is akin to memorizing the answers to an exam, but failing when presented with similar but not identical questions.

To prevent overfitting, you can restrict the depth of the tree, limit the minimum number of samples required at a leaf node, or cap the maximum number of leaf nodes. These techniques are known as pruning; a short sketch follows.
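
Here is a minimal sketch of those restrictions using scikit-learn's hyperparameters; the specific values are only examples and would normally be tuned.

# Illustrative pruning via hyperparameters (values are examples only)
from sklearn.tree import DecisionTreeClassifier

pruned_clf = DecisionTreeClassifier(
    max_depth=5,            # restrict the depth of the tree
    min_samples_leaf=20,    # minimum samples required at a leaf node
    max_leaf_nodes=50       # cap the total number of leaf nodes
)
pruned_clf.fit(X_train, y_train)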

Furthermore, you can use ensemble methods like Random Forests or Gradient Boosting to improve the performance of your decision tree model. These methods combine multiple decision trees to produce a more accurate and robust model.
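
For instance, a hedged Random Forest sketch that reuses the train/test split from the earlier example:

# Illustrative ensemble alternative: Random Forest
from sklearn.ensemble import RandomForestClassifier

# 100 trees vote on each prediction, which typically reduces overfitting
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
print(rf.score(X_test, y_test))  # mean accuracy on the test set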

Decision tree models are a powerful tool for any business analyst's arsenal. With them, you can make data-driven decisions, uncover hidden patterns in your data, and unlock valuable insights that can help propel your business forward. Happy data mining!

