Advanced Predictive Modeling

Lesson 38/77


Developing models using binary logistic regression and assessing their performance


🔎 Interesting Fact: Binary logistic regression is a powerful statistical technique used to model the relationship between a binary dependent variable (where the outcome can only take one of two values) and one or more independent variables. It is widely used in various domains such as risk management, marketing, and clinical research. The aim of developing models using binary logistic regression is to predict the probability of the occurrence of a particular event based on the values of the independent variables.


📚 Main Idea: Developing models using binary logistic regression and assessing their performance involves several key steps, including determining when to use binary logistic regression, building realistic models using functions in R and Python, interpreting the output of global testing, and performing out-of-sample validation.


💡 Step 1: Evaluate when to use Binary Logistic Regression correctly. Before diving into model development, it is crucial to determine whether binary logistic regression is the appropriate technique for the specific problem at hand. Binary logistic regression is suitable when the dependent variable is binary, and the relationship between the independent variables and the outcome is expected to be logistic.


For example, let's say we want to predict whether a customer will churn or not based on their demographic and behavioral characteristics. Since the outcome variable "churn" can only take two values (churn or not churn), binary logistic regression would be an appropriate approach.


💡 Step 2: Develop realistic models using functions in R and Python. Once we have identified that binary logistic regression is suitable, the next step is to develop realistic models using functions in R and Python. These programming languages provide several libraries and functions to fit binary logistic regression models. Here's an example using Python's scikit-learn library:


from sklearn.linear_model import LogisticRegression

# df is assumed to be an existing pandas DataFrame; categorical predictors
# such as 'gender' must be numerically encoded before fitting

# Define the independent variables (X) and dependent variable (y)
X = df[['age', 'income', 'gender']]
y = df['churn']

# Fit the logistic regression model
model = LogisticRegression()
model.fit(X, y)


In this example, we define the independent variables (age, income, gender) and the dependent variable (churn). Then, we fit the model by instantiating the LogisticRegression class and calling its fit() method.


💡 Step 3: Interpret the output of global testing to assess the results. After fitting the logistic regression model, it is essential to interpret the output of global significance testing. Whereas linear regression uses an F-test for this purpose, logistic regression typically relies on the likelihood-ratio test or the Wald test; the Wald test examines the null hypothesis that the coefficients of the independent variables are equal to zero.


The output of the logistic regression model provides information such as the coefficients, p-values, and odds ratios. For example, suppose the coefficient of the "age" variable is 0.2, with a p-value of 0.03. This indicates that for a one-unit increase in age, the log-odds of the outcome increase by 0.2, and this increase is statistically significant at the 5% significance level.
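As a minimal sketch of how this kind of output can be produced in Python (assuming the df, X, and y objects from the earlier example), statsmodels reports coefficients, Wald statistics, and p-values, and exponentiating a coefficient converts it into an odds ratio:

import numpy as np
import statsmodels.api as sm

# Add an intercept and fit the model (X and y as defined earlier)
result = sm.Logit(y, sm.add_constant(X)).fit()

# Coefficients, Wald z-statistics, and p-values
print(result.summary())

# Odds ratios: e.g. exp(0.2) ≈ 1.22, so the odds rise about 22% per unit increase
print(np.exp(result.params))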


💡 Step 4: Perform out-of-sample validation to assess model performance. To assess the performance of the logistic regression model and evaluate its predictive accuracy, it is crucial to perform out-of-sample validation. This involves splitting the dataset into training and testing sets, fitting the model on the training set, and then evaluating its performance on the testing set.


For example, let's split the dataset into 70% training and 30% testing:

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Split the data into 70% training and 30% testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Fit the logistic regression model on the training set
model.fit(X_train, y_train)

# Predict the outcome variable for the testing set
y_pred = model.predict(X_test)

# Assess model performance using evaluation metrics
print(accuracy_score(y_test, y_pred))
print(precision_score(y_test, y_pred))
print(recall_score(y_test, y_pred))


In this example, we split the dataset into training and testing sets using the train_test_split() function. Then, we fit the logistic regression model on the training set and predict the outcome variable for the testing set using the predict() method. Finally, we assess the model's performance with evaluation metrics such as accuracy, precision, and recall from sklearn.metrics.


🌟 Real-Life Example: Let's consider a real-life example of using binary logistic regression in predicting customer churn for a telecommunications company. The company wants to identify customers who are likely to churn based on their calling patterns, billing information, and customer service ratings.


By developing a binary logistic regression model, the company can use customers' historical data (independent variables) such as call duration, monthly charges, contract type, and customer satisfaction ratings to predict the probability of churn (dependent variable). This information can help the company proactively target at-risk customers with retention offers, thereby reducing customer churn and improving business performance.


In conclusion, developing models using binary logistic regression and assessing their performance involves evaluating the suitability of logistic regression, building realistic models in languages like R and Python, interpreting the output of global testing, and performing out-of-sample validation. These steps are essential for accurate predictions and informed decision-making across domains.


Understand the concept of binary logistic regression

  • Definition of binary logistic regression

  • Difference between linear regression and logistic regression

  • Assumptions of binary logistic regression

The Intricacies of Binary Logistic Regression

Have you ever wondered how factors such as age, diet, or family history could influence the probability of developing a certain disease? This is where binary logistic regression comes into play. It's a statistical analysis method often used in the field of medical research to predict the likelihood of a binary outcome.


Binary Logistic Regression 🔑 is a type of statistical analysis that models the relationship between a binary dependent variable and one or more independent variables. Here, the dependent variable only takes on two values - for instance, "disease" or "no disease", "success" or "failure", or "default" or "no default".


For example, in the world of finance, binary logistic regression can be used to predict whether a customer will default on a loan (Yes, the customer will default vs. No, the customer will not default) based on various factors such as their credit score, employment status, and loan amount.

# Example of binary logistic regression
from sklearn.linear_model import LogisticRegression

# data is assumed to be an existing pandas DataFrame; a categorical predictor
# such as 'employment_status' must be numerically encoded before fitting
X = data[['credit_score', 'employment_status', 'loan_amount']]
y = data['default']

logistic_regression = LogisticRegression()
logistic_regression.fit(X, y)
y_pred = logistic_regression.predict(X)


Distinguishing Binary Logistic Regression from Linear Regression


The main differentiator between Linear Regression 📏 and Logistic Regression ⚖️ is the outcome variable. In linear regression, the outcome variable is continuous, taking any value within a certain range. On the other hand, in logistic regression, the outcome variable is categorical, often binary.

Consider two scenarios:


  1. A company wants to predict sales based on advertising spend. Here, the outcome variable (sales) is continuous because it can take any value. This is a case for linear regression.

  2. A medical researcher wants to predict whether a patient has a disease based on their symptoms. Here, the outcome variable (disease) is dichotomous (Yes or No), suitable for logistic regression.


The Assumptions of Binary Logistic Regression

Binary Logistic Regression, like any other statistical model, relies on certain assumptions.


  • Absence of Multicollinearity 🚫 - The independent variables should not be too highly correlated with each other.

  • Linearity of independent variables and log odds 📈 - Although logistic regression doesn't require the dependent and independent variables to be related linearly, it does require that the independent variables are linearly related to the log odds.

  • Large Sample Size 👥 - Binary logistic regression typically requires a large sample size. A minimum of 10 cases with the least frequent outcome for each independent variable in the model is recommended.


For instance, if you're studying the relationship between smoking (independent variable) and lung cancer (dependent variable), the data should not show multicollinearity, the relationship between smoking and the log odds of lung cancer should be linear, and the study should have a sufficiently large sample size.


# Example of checking assumptions of binary logistic regression
import statsmodels.api as sm

# Fit a logit model (add a constant term to X first via sm.add_constant if needed)
logit_model = sm.Logit(y, X)
result = logit_model.fit()

# The summary reports coefficients, standard errors, and p-values; inflated
# standard errors can flag multicollinearity, while linearity in the log odds
# needs separate diagnostics
print(result.summary2())
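For the multicollinearity assumption specifically, a common diagnostic is the variance inflation factor (VIF); here is a minimal sketch, assuming X is a numeric pandas DataFrame of predictors:

from statsmodels.stats.outliers_influence import variance_inflation_factor

# VIF per predictor; values well above 5-10 are commonly read as signs of collinearity
vifs = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
print(dict(zip(X.columns, vifs)))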


Understanding Binary Logistic Regression, differentiating it from Linear Regression, and knowing its assumptions are crucial steps towards developing models using binary logistic regression and assessing their performance. By mastering these concepts, you'll be well on your way to making insightful predictions and decisions based on your datasets.


Collect and prepare the data for binary logistic regression

  • Identify the dependent variable and independent variables

  • Handle missing data and outliers

  • Transform variables if necessary (e.g., categorical variables to dummy variables)


The Art of Data Collection and Preparation for Binary Logistic Regression 👩‍🔬


One of the most crucial steps in developing binary logistic regression models is collecting and preparing the data. In the world of statistics, data is the golden ingredient 🏆, the driving force behind any analysis. The quality of your model ultimately depends on the quality of data you input into it. Remember the adage: garbage in, garbage out!


Decoding the Language of Variables: Dependent and Independent 🕵️‍♂️


In binary logistic regression, we deal with two types of variables: the dependent variable and the independent variables. The dependent variable is what you're interested in predicting or explaining. It's binary, meaning it only takes two values - usually represented as '0' and '1'.


On the other hand, independent variables are the predictors or features that help us predict or explain the dependent variable. They can be continuous or categorical variables and there can be one or many.

Imagine you're a medical researcher studying the factors that predict whether a patient will develop diabetes. Your dependent variable might be 'Develops Diabetes' (yes=1, no=0). Potential independent variables could be 'Age', 'BMI', 'Family History of Diabetes', etc.


# Example of defining dependent and independent variables in Python
dependent_variable = 'Develops Diabetes'
independent_variables = ['Age', 'BMI', 'Family History of Diabetes']


Mending the Gaps: Handling Missing Data and Outliers 🚑


In the real world, data is messy. It's common to encounter missing values and outliers, which can distort your model's performance if not addressed.

When you encounter missing data, you have several options. For instance, you might drop rows with missing values, or fill them in using a technique like mean imputation. The right choice often depends on the amount and nature of the missing data.


Outliers, or extreme values, can also wreak havoc on your model. Techniques for dealing with outliers could include winsorizing (capping extreme values), transforming the variable (e.g. using a log transformation), or using a robust regression method that's less sensitive to outliers.


# Example of handling missing data and outliers in Python
import numpy as np

data = data.dropna()  # Drop rows with missing values
# Alternatively, impute: data['Age'] = data['Age'].fillna(data['Age'].mean())
data['Age'] = np.log(data['Age'])  # Log transform 'Age' to dampen extreme values (requires positive values)


Shape-Shifting Variables: Transformations and Dummy Variables 🔀


Finally, you might need to transform your variables to make them suitable for binary logistic regression. For instance, binary logistic regression assumes that the relationship between independent variables and the log odds of the dependent variable is linear. If this assumption is violated, you might transform the variable using a method like log, square root, or inverse transformation.

And what if your independent variable is categorical with more than two categories? You'll have to create what's known as dummy variables. Each category becomes a new binary variable (0 or 1). For instance, if you have a variable 'Color' with categories 'Red', 'Blue', and 'Green', you'll create three dummy variables: 'Is_Red', 'Is_Blue', and 'Is_Green'.


# Example of transforming variables and creating dummy variables in Python
import pandas as pd

data['Log_Age'] = np.log(data['Age'])  # Log transform 'Age' variable
color_dummies = pd.get_dummies(data['Color'], prefix='Is')  # Create dummy variables for 'Color'
data = pd.concat([data, color_dummies], axis=1)  # Add dummy variables to the data
# Tip: pass drop_first=True to get_dummies to avoid the dummy variable trap (perfect collinearity)


Remember, proper data preparation is a labor of love that sets the foundation for your model. Pay attention to the details, and your model will thank you with improved accuracy and predictive power!


Develop a binary logistic regression model

  • Specify the model equation

  • Select the appropriate variables to include in the model

  • Assess the significance of each variable using statistical tests (e.g., Wald test)

  • Interpret the coefficients of the model


Binary Logistic Regression: A Tool for Probability Estimation 📈


Binary Logistic Regression is a predictive analysis technique and a staple in the field of statistics and data science. It allows us to analyze the relationship between a binary dependent variable and one or more independent variables, producing a probability estimate.


Let's dive into how to develop a binary logistic regression model and assess its effectiveness.


The Art of Specifying Model Equation 🧮


First things first, it's essential to specify the model equation for your binary logistic regression. The equation of our logistic regression would look something like this:

log(p/(1-p)) = β0 + β1*X1 + β2*X2 + ... + βn*Xn


where:

- log(p/(1-p)) is the logarithm of the odds (where p is the probability of the event), also termed the logit.

- β0, β1, ... βn are the coefficients of the model - parameters to be estimated

- X1, X2, ..., Xn are the independent variables.


This equation represents the underlying relationship between your independent and dependent variables. The right side of the equation is a linear combination of the input variables, and the left side is the log of odds of the binary outcome variable.


Take for instance, a company wants to predict if a customer will churn or not. The independent variables could include customer age, tenure, service usage, etc., and the dependent variable is whether the customer churns (1) or not (0).
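To make the link between the two sides concrete, here is a small sketch (the coefficient values are made up) that converts a linear predictor into a churn probability via the inverse of the logit:

import numpy as np

# Hypothetical fitted coefficients: intercept, age, tenure
b0, b_age, b_tenure = -1.5, 0.03, -0.2
z = b0 + b_age * 40 + b_tenure * 3  # linear predictor for a 40-year-old customer with 3 years' tenure
p = 1 / (1 + np.exp(-z))            # inverse logit turns log odds into a probability
print(p)                            # ≈ 0.29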

Selecting Appropriate Variables: The Key to Powerful Models 🔑


The next step is to select the appropriate variables to include in your model. Variable selection is more of an art than a science and can significantly impact your model's performance.


Typically, you'd start with a pool of potential variables that you believe may have an influence on the outcome. From this pool, you need to decide which ones to include in your model. For instance, in the customer churn example, variables like age, tenure, and service usage might be good predictors of churn.


Statisticians often use techniques like stepwise regression, least absolute shrinkage and selection operator (LASSO), or ridge regression to help select the most suitable variables.
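As one illustration of penalized selection (a sketch, assuming a predictor DataFrame X and binary target y; the penalty strength C is arbitrary), scikit-learn supports an L1-penalized logistic regression whose zeroed-out coefficients suggest candidates to drop:

from sklearn.linear_model import LogisticRegression

# The L1 (LASSO-style) penalty shrinks weak predictors' coefficients to exactly zero
lasso_logit = LogisticRegression(penalty='l1', solver='liblinear', C=0.5)
lasso_logit.fit(X, y)
print(dict(zip(X.columns, lasso_logit.coef_[0])))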


Assessing Variable Significance: The Wald Test 🎯


Once you've selected your variables, it's time to assess their significance using statistical tests. One popular test used in logistic regression is the Wald test.

The Wald test assesses the significance of each coefficient in your model. In essence, it tests whether the coefficient of a particular variable is significantly different from zero. If the Wald test produces a p-value less than your chosen significance level (often 0.05), then you can conclude that the variable is a significant predictor of the outcome.

Continuing the customer churn example, a significant p-value for the 'tenure' variable would suggest that tenure significantly affects the likelihood of a customer churning.


Interpreting the Coefficients: The Final Step 📚


Finally, you're ready to interpret the coefficients of your model. In a binary logistic regression model, each coefficient represents the change in the log odds of the outcome for a one-unit change in the corresponding predictor variable, holding all other predictors constant.


For instance, assuming 'tenure' is significant and has a coefficient of -0.2, it means that for every additional year of tenure, the log odds of a customer churning decrease by 0.2, assuming all other factors remain constant.


Remember, the interpretation of the coefficients in logistic regression is not as straightforward as in linear regression. The coefficients in logistic regression are in terms of log odds, and to convert them into odds or probabilities, you will need to use the exponential function (exp).
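For example (a quick check using the hypothetical -0.2 coefficient above):

import numpy as np

# exp(-0.2) ≈ 0.82: each additional year of tenure multiplies the odds of churn
# by about 0.82, i.e. roughly an 18% reduction in the odds
print(np.exp(-0.2))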


In summary, developing a binary logistic regression model involves specifying the model equation, selecting appropriate variables, assessing the significance of each variable, and interpreting the coefficients. Each of these steps is crucial for creating a robust and interpretable logistic regression model.


Assess the performance of the binary logistic regression model


  • Calculate and interpret the odds ratios and their confidence intervals

  • Evaluate the goodness of fit of the model using measures such as the Hosmer-Lemeshow test or the deviance statistic

  • Assess the predictive accuracy of the model using measures such as the area under the receiver operating characteristic (ROC) curve or the Brier score




Odds Ratios and Confidence Intervals


Did you know? Odds ratios are pivotal in understanding logistic regression models. An odds ratio is a measure of association between a certain property or characteristic and an outcome.


For instance, in a study to determine risk factors for lung cancer, an odds ratio might be used to measure the risk associated with smoking. If the odds ratio is 2, it implies that smokers are twice as likely to develop lung cancer compared to non-smokers.


In assessing your binary logistic regression model, you would calculate the odds ratios for all predictors in the model. In R, for example, summary() reports the coefficients on the log-odds scale, and exponentiating them yields the odds ratios:

summary(my_logit_model)
exp(coef(my_logit_model))  # odds ratios


The confidence intervals for these odds ratios can also be calculated. A 95% confidence interval, for instance, gives a range constructed so that, across repeated samples, 95% of such intervals would contain the true odds ratio. In R, exp(confint(my_logit_model)) produces these intervals directly.
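The same quantities can be obtained in Python with statsmodels (a sketch, assuming a fitted result object like the one in the earlier statsmodels examples):

import numpy as np

# Odds ratios and their 95% confidence intervals from a fitted statsmodels Logit
print(np.exp(result.params))
print(np.exp(result.conf_int()))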


Goodness of Fit: Hosmer-Lemeshow Test and Deviance Statistic


Checking the goodness of fit of your model is crucial in any modelling process. In binary logistic regression, the Hosmer-Lemeshow test and the deviance statistic are often used.


The Hosmer-Lemeshow test divides subjects into deciles based on predicted probabilities, then compares observed and expected event rates. A good model fit would result in a high p-value (greater than 0.05), indicating no significant difference between observed and expected event rates.


The deviance statistic, on the other hand, is a measure of the difference between the maximum achievable likelihood and the model's likelihood. A low deviance represents a good fit.
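For intuition, here is a rough Python sketch of the Hosmer-Lemeshow statistic (an illustration only, assuming a 0/1 array y_true and predicted probabilities p_hat; a packaged implementation such as the R one below is preferable in practice):

import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(y_true, p_hat, g=10):
    d = pd.DataFrame({'y': y_true, 'p': p_hat})
    d['decile'] = pd.qcut(d['p'], g, duplicates='drop')   # groups of predicted risk
    obs = d.groupby('decile', observed=True)['y'].sum()   # observed events per group
    exp = d.groupby('decile', observed=True)['p'].sum()   # expected events per group
    n = d.groupby('decile', observed=True)['y'].count()
    stat = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
    return stat, chi2.sf(stat, len(obs) - 2)              # statistic and p-value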


In R, you can perform the Hosmer-Lemeshow test using the hoslem.test() function from the ResourceSelection package (observed responses first, fitted probabilities second), and compute the drop in deviance using the with() function.

library(ResourceSelection)
hoslem.test(your_actual_responses, your_model$fitted.values)
with(your_model, null.deviance - deviance)


Predictive Accuracy: ROC Curve and Brier Score


Lastly, evaluating predictive accuracy can help us determine how well the model is able to predict the correct class. The Receiver Operating Characteristic (ROC) curve and the Brier score are two common measures.


The ROC curve plots sensitivity against 1-specificity for every possible cutoff. The area under the ROC curve, often called the AUC, gives a single measure of predictive accuracy. An AUC of 1 means perfect prediction, whereas an AUC of 0.5 means the model is no better than random guessing.


The Brier score is another measure of accuracy, which calculates the mean squared difference between predicted probabilities and the actual outcome. A lower Brier score suggests a more accurate model.
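In Python, both measures are available in scikit-learn (a sketch, assuming true labels y_test and predicted probabilities p_hat):

from sklearn.metrics import roc_auc_score, brier_score_loss

# AUC: probability that the model ranks a random positive above a random negative
print(roc_auc_score(y_test, p_hat))

# Brier score: mean squared difference between predicted probability and outcome
print(brier_score_loss(y_test, p_hat))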


To calculate the AUC and Brier score in R, you can use the roc() function from the pROC package and the BrierScore() function from the DescTools package.

library(pROC)
roc_obj <- roc(your_actual_responses, your_model$fitted.values)
auc(roc_obj)

library(DescTools)
BrierScore(your_actual_responses, your_model$fitted.values)


Remember, the assessment of a logistic regression model's performance is not a one-size-fits-all process. Depending on your study design and research question, one measure of goodness of fit or predictive accuracy may be more appropriate than others. Always consider the context and purpose of your model when choosing which measures to use in assessment.



Validate the binary logistic regression model


  • Split the data into training and testing sets

  • Apply the model to the testing set and evaluate its performance

  • Use techniques such as cross-validation or bootstrapping to assess the stability of the model's performance


Note: The outlined concepts provide a general overview of the steps involved in developing and assessing binary logistic regression models. It is important to refer to specific software documentation and resources for detailed instructions on implementing these steps in R and Python.


The Intricacies of Validating Binary Logistic Regression Models


Binary Logistic Regression 🔑 is a statistical analysis method widely used for its simplicity and effectiveness in predicting binary outcomes. At its core, it is about figuring out the probability of an event happening based on one or more independent variables. Nevertheless, even the most meticulously crafted model will be rendered useless if it fails the validation process.


Division of Data: Training and Testing Sets


In the world of data science, it is a common practice to split your data into subsets: the training set and the testing set. This lets you evaluate the model on unseen data, giving a more honest estimate of how well it will generalize to new situations.


The training set is the larger subset, usually consisting of about 70-80% of your data, and it is on this set that you will build your binary logistic regression model. Meanwhile, the testing set, comprising the remaining 20-30%, is used to validate the model's performance.


Imagine we have a dataset of patients' medical records, and we want to predict whether they will develop heart disease based on features such as cholesterol level, age, and sex.

# Assume we are using Python and have a DataFrame `df`
from sklearn.model_selection import train_test_split

# Define the features and the target
X = df.drop('heart_disease', axis=1)
y = df['heart_disease']

# Split the data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)


In the above code, we are using a function from the sklearn library in Python to split the data into training and testing sets.


Assessing Model's Performance on the Testing Set


After fitting the model using the training set, it's time to evaluate its performance on the testing set. Commonly used evaluation metrics for binary logistic regression include accuracy, precision, recall, F1 score, and AUC-ROC. These measures offer different perspectives on the model's performance.


For instance, in the heart disease prediction example mentioned above, accuracy would tell you what proportion of patients were correctly predicted to have or not have heart disease. If you want to prioritize avoiding false negatives (i.e., predicting that a patient does not have heart disease when they actually do), you might want to consider recall as your evaluation metric.


# Assume the fitted model is `log_model`
from sklearn.metrics import classification_report

# Apply the model to the testing set
predictions = log_model.predict(X_test)

# Evaluate the performance
print(classification_report(y_test, predictions))


In this code segment, we make predictions using our model on the testing set and then print out a report with the mentioned evaluation metrics.


Ensuring Model Stability: Cross-Validation and Bootstrapping

While assessing the model's performance on the testing set is crucial, it's not enough. We also need to ensure the model's stability, i.e., its performance should not significantly vary with different data samples.

Enter Cross-Validation 🔑 and Bootstrapping 🔑. These are robust techniques to assess the model's stability. Cross-validation involves dividing the data into 'k' subsets, then training the model 'k' times, each time using a different subset as the testing set and the remaining data as the training set. On the other hand, bootstrapping involves sampling the data with replacement and training the model on each sample.


Both techniques provide a more robust measure of the model's performance and help to prevent overfitting by ensuring the model performs well on different samples of the data.

# Assume we are using Python and have a DataFrame `df`
from sklearn.model_selection import cross_val_score

# Perform 5-fold cross-validation
scores = cross_val_score(log_model, X, y, cv=5)

# Print the scores
print('Cross-Validation Accuracy Scores: ', scores)


In this code, we are performing 5-fold cross-validation on our binary logistic regression model and then printing the accuracy scores for each fold.
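Bootstrapping can be sketched in a similar spirit (an illustration, assuming the X_train, X_test, y_train, y_test split and log_model from above):

import numpy as np
from sklearn.utils import resample

# Refit the model on repeated bootstrap samples and record accuracy on the held-out test set
accuracies = []
for i in range(100):
    X_boot, y_boot = resample(X_train, y_train, random_state=i)  # sample with replacement
    log_model.fit(X_boot, y_boot)
    accuracies.append(log_model.score(X_test, y_test))
print('Bootstrap accuracy: mean %.3f, sd %.3f' % (np.mean(accuracies), np.std(accuracies)))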

By paying attention to these steps and techniques for validating binary logistic regression models, we can build models that are not only accurate but also robust and reliable.

