{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"id": "086c3171",
"metadata": {
"tags": [
"remove-input"
]
},
"outputs": [],
"source": [
"# If you are reading this notebook on a Binder, make sure to right-click the file name in\n",
"# the left-side file viewer and select `Open With > Notebook` to view this as a notebook"
]
},
{
"cell_type": "markdown",
"id": "def3fc60",
"metadata": {},
"source": [
"# Ensemble Methods\n",
"\n",
"Last time, we discussed the details of training and evaluating our decision tree model. In this chapter, we will discuss the powerful concept of **ensemble models**, or models composed of a group of smaller models. The idea is that models that don't perform well individually might perform better collectively. This follows the observation of \"the wisdom of the crowd\", where aggregate judgements can often outperform the judgement of even an informed individual. One example of the wisdom of the crowd comes from Francis Galton's observation of a contest at a fair in 1906, where 787 participants tried to guess the weight of an ox. The participants included experts such as farmers and butchers as well as regular folk. Surprisingly, the best guess was actually the *average guess* across all 787 participants: the average guess was 1196 lbs, while the true weight was 1198 lbs. This average was much closer than any individual's guess, including the guesses of the experts.\n",
"\n",
"Applying this idea to machine learning, we can hope to make a more accurate model by combining the guesses of individual models that might not be as effective on their own. There are many different approaches to combining models of various types into an ensemble, but they generally fall into three categories that we will explore in this chapter.\n",
"\n",
"* **Stacking**\n",
"* **Bagging**\n",
"* **Boosting**\n",
"\n",
"Note that these are broad classes that describe many types of models. In this chapter, we will primarily explore one type of model from each category.\n",
"\n",
"## Aside: Data Types\n",
"\n",
"Before diving into the various types of ensemble models, we want to take a step back to have a discussion about the different types of data we will train our models on. This discussion has nothing to do with ensemble models directly, but is a more general concept when thinking of our machine learning pipeline.\n",
"\n",
"The data we receive can vary in format and type. Very generally, there are two extremely common types of data, especially in the tabular data we have been working with (each with its own subtypes).\n",
"\n",
"* **Numeric Data** describes data representing numbers (quantitative)\n",
"* **Categorical Data** describes data representing distinct categories (qualitative)\n",
"\n",
"### Numeric Data\n",
"\n",
"Data describing numbers generally comes in one of two types.\n",
"\n",
"* **Discrete** values cannot be subdivided. For example, in our house price prediction example, the feature representing the number of bedrooms is an integer (1, 2, 3, ...) that cannot be subdivided further.\n",
"* **Continuous** values can be subdivided. For example, the area of a house is a real number that can be infinitesimally subdivided (assuming precise enough measurements).\n",
"\n",
"There is a tricky case of numeric values such as the price of a house. While price seems more like a continuous value, you can only break the units down to cents before not being able to go further. You generally have to consider whether you would like to treat such a variable as discrete or continuous. A general rule of thumb: if the discreteness comes from the quantity being measured (a bedroom has to be a whole number), treat the variable as discrete; but if the discreteness comes from the unit of measurement (prices can only be broken down to cents), you can treat it as continuous. You will likely just need an extra step to round outputs to the precision appropriate for the problem.\n",
"\n",
"All of the ML models that we have described so far assume numeric inputs, so there is rarely any preprocessing you need to do to make numeric data work. You may have to consider things like rounding, as described above, based on the precision limitations of your numeric data, but generally that's not something you have to worry much about.\n",
"\n",
"### Categorical Data\n",
"\n",
"Data describing categories also generally comes in one of two types.\n",
"\n",
"* **Ordinal** data is categorical data that has a defined order. For example, a school rating of good/okay/bad has a clear ordering of which category can be considered greater than another.\n",
"* **Nominal** data is categorical data that does not have a defined order. For example, the type of school you attend (public/private/charter/homeschool) does not have an ordering of which one of those schools is greater than another (although everyone might have an opinion about which one is best).\n",
"\n",
"```{margin}\n",
"{{ref_sklearn_trees}}\\. While this is true in theory, `sklearn` does not implement their tree like this so you actually do have to transform the values when using that library for all model types.\n",
"```\n",
"\n",
"As we mentioned before, all of the ML models we have discussed so far (with the exception of Decision Trees^{{{ref_sklearn_trees}}}) have assumed the inputs are numeric values. That means in order to train these models on data with categorical features, we have to do some preprocessing.\n",
"\n",
"How might we go about transforming categorical variables into numeric ones? One natural idea is to use what we might call a *value encoding* to create a mapping from each category to a number. So for example of good/okay/bad, we could make the following mapping:\n",
"\n",
"* Good = 1\n",
"* Okay = 0\n",
"* Bad = -1\n",
"\n",
"This actually works fine in practice with ordinal data, if we choose our mappings to respect the ordering of the categories. However, this setup doesn't work at all with nominal values. Consider our example of a category for school type with the values public/private/charter/homeschool. We could come up with a value encoding such as the following.\n",
"\n",
"* Public = 1\n",
"* Private = 2\n",
"* Charter = 3\n",
"* Homeschool = 4\n",
"\n",
"Choosing this encoding for nominal values, however, introduces some problems into our data that we might not want. In particular:\n",
"\n",
"* We have now defined an implicit ordering between the categories (Homeschool > Public) even though as nominal values, they are not supposed to have such an ordering.\n",
"* We have also added unintended relationships between feature values. Since they are just numbers to the model, the model expects this feature to behave like any numeric feature. That means mathematical statements such as Public (1) + Charter (3) = Homeschool (4) or Private (2) * Private (2) - Public (1) = Charter (3) now hold. These spurious numeric relationships are a byproduct of how we expect numbers to behave, so representing nominal values in this way creates completely unexpected relationships in our data.\n",
" * This is technically also a critique of using value encodings for ordinal data, but because many models only care about the relative ordering of feature values, it generally doesn't cause problems.\n",
"\n",
"To fix this, we will need a slightly more complicated encoding. One of the most common encoding types is a **one-hot encoding**, which we briefly showed in a code example in the last chapter. A one-hot encoding turns a categorical feature into many binary features, one for each value the original feature takes on, and marks with a 1 the value that each example had. This is clearer with an example.\n",
"\n",
"Consider a small school dataset with the following columns.\n",
"\n",
"```{table} Raw School Dataset\n",
"\n",
"| School | Sq. Ft. | Rating |\n",
"|---------|---------|--------|\n",
"| Public | 1000 | Good |\n",
"| Private | 1500 | Bad |\n",
"| Charter | 700 | Good |\n",
"| Private | 1200 | Good |\n",
"```\n",
"\n",
"```{table} One-Hot Encoded Dataset\n",
"\n",
"| School - Public | School - Private | School - Charter | Sq. Ft. | Rating - Good | Rating - Bad |\n",
"|------------------|------------------|------------------|---------|---------------|--------------|\n",
"| 1 | 0 | 0 | 1000 | 1 | 0 |\n",
"| 0 | 1 | 0 | 1500 | 0 | 1 |\n",
"| 0 | 0 | 1 | 700 | 1 | 0 |\n",
"| 0 | 1 | 0 | 1200 | 1 | 0 |\n",
"```\n",
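"\n",
"A quick way to produce a one-hot encoding like this in practice is `pandas.get_dummies`. The sketch below simply recreates the small school table above and encodes it (column names like `School_Public` are whatever `pandas` generates, not a convention from this chapter):\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# Recreate the small school dataset from the table above\n",
"schools = pd.DataFrame({\n",
"    \"School\": [\"Public\", \"Private\", \"Charter\", \"Private\"],\n",
"    \"Sq. Ft.\": [1000, 1500, 700, 1200],\n",
"    \"Rating\": [\"Good\", \"Bad\", \"Good\", \"Good\"],\n",
"})\n",
"\n",
"# One new 0/1 column per (feature, value) pair; numeric columns pass through\n",
"encoded = pd.get_dummies(schools, columns=[\"School\", \"Rating\"], dtype=int)\n",
"print(encoded)\n",
"```\n",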
"\n",
"## Stacking\n",
"\n",
"**Stacking** involves taking the predictions of various model types and combining them into an overall model by weighting their responses. For a concrete example, we might train three different models on a training set such as:\n",
"\n",
"* A Logistic Regression Model\n",
"* A Decision Tree Model\n",
"* A Neural Network\n",
"\n",
"Each model will make a prediction $\\hat{y}_j$ and these outputs are used as an input for another model type (usually Linear Regression for regression and Logistic Regression for classification) to synthesize an overall answer based on weighting their individual guesses.\n",
"\n",
"```{figure} stacking.png\n",
"---\n",
"alt: A visual depiction of stacked models. The inputs go to three models (Decision Tree, Logistic Regression, Neural Networks) which go to the Stacked Model with various weights.\n",
"width: 80%\n",
"align: center\n",
"---\n",
"A Stacked model with 3 models in the ensemble each with weight $w_1, w_2, w_3$\n",
"```\n",
"\n",
"We won't have much to say about this particular ensemble: choosing which models to include is largely a heuristic, and learning the weights between them simply amounts to training a model that uses the individual models' outputs as its inputs.\n",
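"\n",
"If you do want to try stacking, `sklearn` packages this pattern as `StackingClassifier`. Below is a minimal sketch on synthetic data; the particular ensemble members and their settings are just illustrative choices, not a recommendation:\n",
"\n",
"```python\n",
"from sklearn.datasets import make_classification\n",
"from sklearn.ensemble import StackingClassifier\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.neural_network import MLPClassifier\n",
"from sklearn.tree import DecisionTreeClassifier\n",
"\n",
"X, y = make_classification(n_samples=200, random_state=42)\n",
"\n",
"# Three base models; the final Logistic Regression learns how to weight them\n",
"stack = StackingClassifier(\n",
"    estimators=[\n",
"        (\"lr\", LogisticRegression()),\n",
"        (\"tree\", DecisionTreeClassifier(max_depth=3)),\n",
"        (\"nn\", MLPClassifier(max_iter=2000)),\n",
"    ],\n",
"    final_estimator=LogisticRegression(),\n",
")\n",
"stack.fit(X, y)\n",
"print(\"Train accuracy:\", stack.score(X, y))\n",
"```\n",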
"\n",
"## Bagging - Random Forests\n",
"\n",
"A **Random Forest** is a specific type of ensemble model that leverages the concept of **bagging**. Let's first discuss the idea of this specific model before defining what bagging means in general. A Random Forest is an ensemble model composed of $T$ Decision Trees. Each Decision Tree casts a \"vote\" for a particular label, and the ensemble combines all of the votes to make an overall prediction. For classification tasks, the votes are counted and the majority label is predicted; for regression, the average of the trees' predictions is used.\n",
"\n",
"```{figure} random_forest.png\n",
"---\n",
"alt: A visual depiction of a random forest. Each tree casts a vote and the majority class is the overall ensemble's prediction\n",
"width: 100%\n",
"align: center\n",
"---\n",
"A Random Forest, where each tree gets to cast a vote for the predicted label.\n",
"```\n",
"\n",
"A natural question is: if we only have one training set, how can we learn a collection of $T$ different trees? Clearly each tree would need to differ somehow. If they were all the exact same model and all made the exact same predictions, an ensemble of clones would make the exact same decisions as any individual. So how can we create differences between the trees?\n",
"\n",
"We accomplish this by creating sampled datasets by **bootstrapping** our original dataset. Bootstrapping is the process of randomly sampling from our dataset, with replacement, to make new versions of the dataset that are slightly different from the original. We make a bootstrapped version of our dataset for each of the $T$ trees we want to train, so that the dataset each tree trains on is a random sample (with replacement) of the original. A couple of key details about this sampling procedure:\n",
"\n",
"```{margin}\n",
"{{ref_sampling}}\\. Think of a simplified example of randomly pulling three marbles from a bag that contains one red marble, one white marble, and one blue marble. If you draw the three marbles, replacing them after you pull them out, it's entirely possible to draw the red one twice and the blue one once; thus leaving the white marble out of your set of 3 you drew.\n",
"```\n",
"\n",
"* Each bootstrapped dataset will have $n$ examples like the original dataset. But because we sample with replacement, some examples from the original will be left out since we will not choose some by chance^{{{ref_sampling}}}.\n",
"* We also randomly select $m < D$ of the features for each bootstrapped sample. That means each tree is only trained on a subset of size $m$ of our original features (not shown in the figure below). We'll discuss this hyperparameter later.\n",
"\n",
"```{figure} bootstrapping.png\n",
"---\n",
"alt: \"Bootstrapping data for each tree by randomly selecting examples with replacement. Not shown: Also would randomly select features.\"\n",
"width: 60%\n",
"align: center\n",
"---\n",
"Bootstrapping data for each tree by randomly selecting examples with replacement. Not shown: Also would randomly select features.\n",
"```\n",
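"\n",
"Bootstrapping itself is just index sampling. Here is a minimal NumPy sketch (the array names are made up for illustration):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(seed=0)\n",
"n, D = 10, 5                  # 10 examples, 5 features\n",
"X = rng.normal(size=(n, D))\n",
"\n",
"# n row indices drawn *with* replacement: some rows repeat, others are left out\n",
"rows = rng.integers(0, n, size=n)\n",
"# m distinct features drawn *without* replacement\n",
"m = 2\n",
"cols = rng.choice(D, size=m, replace=False)\n",
"\n",
"X_boot = X[rows][:, cols]\n",
"print(X_boot.shape)  # (10, 2)\n",
"```\n",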
"\n",
"Random Forests are a specific type of ensemble model known as a **Bagging Ensemble**. Bagging stands for \"Bootstrapped aggregation\" which comes from the fact that we are aggregating predictions over an ensemble of models trained on bootstrapped datasets.\n",
"\n",
"So far, we have discussed how our Random Forest is a collection of Decision Trees trained on random bootstraps of the original data. One important, but counter-intuitive, detail about these trees is that we will let them grow *without any limit on how tall they can become*. This is precisely because we actually *want* each tree to overfit on the dataset it is trained on. It may seem strange that after a whole book of trying to prevent overfitting, we are now going to encourage overfitting in this context.\n",
"\n",
"The \"why\" of this comes from the property that models that overfit are generally high variance. Recall from our bias-variance tradeoff discussion that high variance means a model will change wildly even with minor changes in the data (and is thus likely to overfit). Also from that discussion, models that have high variance generally have low bias. Remember that we defined low bias to mean that on average (over all models you could learn from all possible versions of your dataset), we learn the true function. This fact is exactly what we are taking advantage of in an ensemble like a Random Forest! By letting each tree overfit and averaging over all of them, our hope is that this \"average model\" will be close to the true function, since the individual overfit trees have low bias.\n",
"\n",
"### Random Forest Algorithm\n",
"\n",
"```{prf:algorithm} Random Forest Training\n",
":label: random_forest_training\n",
"\n",
"**Input**: Training Dataset $D$. Hyperparameters: $T$ number of trees and $m$ number of features to select for each sample\n",
"\n",
"**Output**: An ensemble of $T$ trees $\\hat{F}$\n",
"\n",
"1. For $i \\in [1, ... T]$:\n",
" 1. $D' = bootstrap(D, m)$ to randomly sample (with replacement) $n$ rows and $m$ columns from $D$\n",
" 2. Train a tree $\\hat{t}_i$ with no height limit on the data $D'$ (only with its $m$ features). The hope is these trees will overfit to the bootstrapped samples they are trained on.\n",
" * Note that each tree needs to remember which $m$ features it trained on for prediction later on\n",
"2. Return ensemble of $T$ trees\n",
"```\n",
"\n",
"```{prf:algorithm} Random Forest Prediction\n",
":label: random_forest_prediction\n",
"\n",
"**Input**: An ensemble of $T$ trained trees $\\hat{F}$, and an input $x$ to make a prediction on\n",
"\n",
"**Output**: A prediction $\\hat{y} = \\hat{F}(x)$ for input $x$\n",
"\n",
"1. For $\\hat{t}_i \\in \\hat{F}$:\n",
" 1. $x' = x$ with only the $m$ features $t_i$ trained on\n",
" 2. $\\hat{y}_i = \\hat{t}_i(x')$\n",
"2. Return aggregate of all $\\hat{y}_i$\n",
" * For regression: $\\hat{y} = \\frac{1}{T} \\sum_{i=1}^T \\hat{y}_i$\n",
" * For classification: $\\hat{y} = majority(\\{\\hat{y}_i\\}_{i=1}^T)$\n",
"```\n",
"\n",
"### Random Forest Properties\n",
"\n",
"When thinking about Random Forests, it is useful to keep in mind the following important properties this model has in general. As always, these are just generalizations, and they don't mean you always should or shouldn't use these models in a specific context. But they are useful rules of thumb to know.\n",
"\n",
"* Random Forests use overfitting to their advantage by averaging out over many overfit trees. Averaging is a great variance reduction technique in general that this ensemble employs.\n",
"`````{div} full-width\n",
"\n",
"````{sidebar}\n",
"```{figure} kinect.png\n",
"---\n",
"alt: Screenshot of a paper showing how to highlight pose estimates from a 3D scan of body position\n",
"width: 100%\n",
"align: center\n",
"---\n",
"[Example paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/BodyPartRecognition.pdf) using Random Forests to estimate poses from a depth image\n",
"```\n",
"````\n",
"* Random Forests are versatile models that generally work pretty well in many contexts. You can use them in all sorts of settings such as regression, classification, clustering (not discussed), and identifying important features.\n",
"* Random Forests are generally low maintenance models. That means you actually don't need to do much careful hyperparameter tuning (except for the number of features to sample $m$). Because we are averaging over all of these trees, we generally see improved generalization accuracy as we increase the number of trees $T$. More trees will take longer to train, but we don't really need to worry about overfitting by adding too many trees to our ensemble. While there are some hyperparameters to tune, they generally have a smaller effect on the model's performance. Because of this, Random Forests are often seen as a good \"out of the box\" model.\n",
"`````\n",
"* Random Forests are pretty efficient to train. Because we sample only $m$ features, training each tree is generally pretty fast (for reasonable $m \\ll D$). In addition, the training of each tree is independent of the others. This means we can leverage *parallelization* to train trees on different CPUs to speed up our training time. If you wanted, you could buy a bunch of compute time from Amazon Web Services (AWS) and train one tree on each of the computers you rent to save a lot of time!\n",
"\n",
"### Random Forest Code\n",
"\n",
"Training a Random Forest in scikit-learn is quite easy, and is similar to many of the models we have seen previously. The following code block shows how to train a Random Forest classifier on the income dataset we saw in the last chapter ([income.csv](./income.csv))."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "61ba5491",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Train Accuracy: 0.9999616093366094\n",
"Test Accuracy: 0.8576692768309535\n"
]
}
],
"source": [
"import pandas as pd\n",
"\n",
"from sklearn.metrics import accuracy_score\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import OneHotEncoder\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"\n",
"# Load in data, and separate features and label\n",
"data = pd.read_csv(\"income.csv\")\n",
"label = \"income\"\n",
"features = data.columns[data.columns != label]\n",
"\n",
"# Train test split\n",
"train_data, test_data, train_labels, test_labels = train_test_split(\n",
" data[features], data[label], test_size=0.2)\n",
"\n",
"# Transform categorical features. Note that we use the same transformation\n",
"# on both train and test\n",
"encoder = OneHotEncoder(handle_unknown='ignore')\\\n",
" .fit(train_data)\n",
"train_data = encoder.transform(train_data)\n",
"test_data = encoder.transform(test_data)\n",
"\n",
"# Train model\n",
"model = RandomForestClassifier(n_estimators=200)\n",
"model.fit(train_data, train_labels)\n",
"\n",
"# Make predictions\n",
"train_predictions = model.predict(train_data)\n",
"test_predictions = model.predict(test_data)\n",
"\n",
"print(\"Train Accuracy:\", accuracy_score(train_labels, train_predictions))\n",
"print(\"Test Accuracy:\", accuracy_score(test_labels, test_predictions))"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "29efa22d",
"metadata": {
"mystnb": {
"image": {
"align": "center",
"alt": "A graph showing improved training/test accuracy as number of trees increase",
"width": "75%"
}
},
"tags": [
"remove-input",
"remove-output"
]
},
"outputs": [],
"source": [
"# import matplotlib.pyplot as plt\n",
"#\n",
"# MAX_TREES = 300\n",
"#\n",
"# num_trees = list(range(1, MAX_TREES + 1, 5))\n",
"# train_accs = []\n",
"# test_accs = []\n",
"#\n",
"# model = RandomForestClassifier(warm_start=True) # Use last iteration to start next\n",
"#\n",
"# for i in num_trees:\n",
"# model.set_params(n_estimators=i)\n",
"#\n",
"# model.fit(train_data, train_labels)\n",
"#\n",
"# train_predictions = model.predict(train_data)\n",
"# test_predictions = model.predict(test_data)\n",
"#\n",
"# train_accs.append(accuracy_score(train_labels, train_predictions))\n",
"# test_accs.append(accuracy_score(test_labels, test_predictions))\n",
"#\n",
"# # Plot results\n",
"# fig, ax = plt.subplots(1)\n",
"# ax.plot(num_trees, train_accs, label=\"Train\")\n",
"# ax.plot(num_trees, test_accs, label=\"Test\")\n",
"#\n",
"# ax.set_title(\"Improved Performance when Adding Trees to Random Forest\")\n",
"# ax.set_xlabel(\"Num Trees\")\n",
"# ax.set_ylabel(\"Accuracy (clipped)\")\n",
"# ax.legend()"
]
},
{
"cell_type": "markdown",
"id": "8c40eebb",
"metadata": {},
"source": [
"### Practicalities\n",
"\n",
"We conclude this section with a few practical details about using Random Forests.\n",
"\n",
"#### Number of features $m$\n",
"\n",
"While adding more trees will generally improve the Random Forest model, that doesn't mean you don't have to do any hyperparameter tuning. Recall that in our bootstrapping step, we randomly select $n$ rows and $m$ features, where $n$ is the number of training points in the original dataset and $m$ is a hyperparameter. Randomly selecting features is important for the overall ensemble because it de-correlates the trees in the ensemble. In general, you have to tune $m$ by comparing the performance of models with different settings of $m$ using a validation set or cross validation. The authors of the original Random Forest paper suggest $m = \\lfloor\\sqrt{D}\\rfloor$ features for classification tasks and $m = \\lfloor D/3 \\rfloor$ for regression tasks. It's unclear how effective those defaults actually are, but they are starting points that you can then tune from.\n",
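"\n",
"In `sklearn`, this hyperparameter is called `max_features` (note that `sklearn` samples features per *split* rather than per tree, but the role of the hyperparameter is the same), and it can be tuned with cross validation like any other. A sketch on synthetic data:\n",
"\n",
"```python\n",
"from sklearn.datasets import make_classification\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"from sklearn.model_selection import GridSearchCV\n",
"\n",
"X, y = make_classification(n_samples=300, n_features=20, random_state=0)\n",
"\n",
"# Try sqrt(D), roughly D/3, and all D features at each split\n",
"search = GridSearchCV(\n",
"    RandomForestClassifier(n_estimators=100, random_state=0),\n",
"    param_grid={\"max_features\": [\"sqrt\", 0.33, None]},\n",
"    cv=5,\n",
")\n",
"search.fit(X, y)\n",
"print(\"Best max_features:\", search.best_params_[\"max_features\"])\n",
"```\n",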
"\n",
"#### Out of Bag (OOB) Error\n",
"\n",
"One useful feature of Random Forests is that you can actually get an estimate of the ensemble's future performance without a test set, using only the training set! While that seems surprising, we are able to do this by relying on the fact that not every tree in the ensemble saw every example from the training set. Because each tree was trained on a bootstrapped sample of the training set, some of the examples were left out of each tree's bootstrapped dataset. If we want to estimate future error, we can do so by asking each tree to make predictions on the training points it *didn't* train on.\n",
"\n",
"This concept is called the **Out of Bag Error** (OOB Error). To calculate it we have the ensemble make a prediction on every training point, but only allow the trees that didn't train on that training point to have a vote in the decision. Then we just collect all of these ensemble predictions over our training set and compute whichever quality metric we are using on the predictions and true labels.\n",
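"\n",
"`sklearn` will compute this for you if you pass `oob_score=True` when constructing the forest. A small sketch on synthetic data:\n",
"\n",
"```python\n",
"from sklearn.datasets import make_classification\n",
"from sklearn.ensemble import RandomForestClassifier\n",
"\n",
"X, y = make_classification(n_samples=500, random_state=0)\n",
"\n",
"# Each training point is scored only by the trees that never saw it\n",
"model = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)\n",
"model.fit(X, y)\n",
"print(\"OOB accuracy estimate:\", model.oob_score_)\n",
"```\n",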
"\n",
"## Boosting - AdaBoost\n",
"\n",
"In this last section of the chapter, we will introduce another type of ensemble model that is also a collection of trees, but the details of how it works are quite different. We will explore a particular learning algorithm called **AdaBoost** that is an example of our final type of ensemble, a **boosted ensemble**. Before defining what these terms are, let's explore a bit of the history that led to the model's inception.\n",
"\n",
"Many machine learning researchers are interested in the theoretical limits of ML models: what they can or can't learn, what guarantees we can make about a model's performance, and many other important questions. One example problem asked whether an ensemble of ineffective models could be combined into a better one. A **weak learner** is a model that does only slightly better than random guessing at a task. Kearns and Valiant (1988, 1989) asked if a set of weak learners could be combined into a strong learner. In 1990, Schapire showed that this is possible, and this line of work later led Freund and Schapire to develop the AdaBoost algorithm.\n",
"\n",
"**AdaBoost** is an ensemble of decision trees much like Random Forests, that has three notable differences that impact how we train it. We'll present these differences at a high-level before diving into details with an example.\n",
"\n",
"1. Instead of using tall decision trees that overfit to the data, we will limit the models in the ensemble to *decision stumps* (trees with a single split).\n",
"2. Instead of doing a majority vote over the models in the ensemble, each model will be assigned a weight and we take a *weighted majority vote*. For example, if we are in a binary classification setting where $y \\in \\{+1, -1\\}$, we will make predictions as follows, where $\\hat{f}_t(x)$ is the prediction of decision stump $t$ and $\\hat{w}_t$ is that model's weight in the majority vote.\n",
"\n",
"$$\\hat{y} = \\hat{F}(x) = sign\\left( \\sum_{t=1}^T \\hat{w}_t \\hat{f}_t(x) \\right)$$\n",
"3. Instead of bootstrapping datasets for each model in the ensemble, we will use the whole dataset to train each decision stump. To add variation between stumps, we will add a notion of *datapoint weights* $\\alpha_i$ and find decision stumps that minimize a notion of *weighted classification error*.\n",
"\n",
"\n",
"### AdaBoost Predictions\n",
"\n",
"Before discussing all of the details behind these differences, let's see an example AdaBoost model and how it makes predictions. In the figure below, we have an AdaBoost model with four decision stumps, each of which makes a prediction on some example point $x$.\n",
"\n",
"```{figure} adaboost.png\n",
"---\n",
"alt: Four decision stumps in an AdaBoost model with various model weights (explained below)\n",
"width: 100%\n",
"align: center\n",
"---\n",
"Example AdaBoost ensemble\n",
"```\n",
"\n",
"To find the ensemble's prediction for the input $x$, we have to first get the prediction from each stump in the ensemble and then combine their predictions with their weights to get a weighted majority vote.\n",
"\n",
"$$\\begin{aligned}\n",
"\\hat{y} &= \\hat{F}(x)\\\\\n",
" &= sign\\left( \\sum_{t=1}^T \\hat{w}_t \\hat{f}_t(x) \\right)\\\\\n",
" &= sign\\left(2 \\cdot 1 + (-1) \\cdot (-1) + 1.5 \\cdot (-1) + 0 \\cdot (-1)\\right)\\\\\n",
" &= sign(1.5)\\\\\n",
" &= +1\n",
"\\end{aligned}$$\n",
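"\n",
"The same weighted vote takes only a couple of lines of NumPy, using the model weights and stump votes from the figure:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"w = np.array([2.0, -1.0, 1.5, 0.0])  # model weights\n",
"votes = np.array([1, -1, -1, -1])    # each stump's prediction for x\n",
"\n",
"y_hat = np.sign(w @ votes)\n",
"print(y_hat)  # 1.0\n",
"```\n",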
"\n",
"### AdaBoost Training\n",
"\n",
"As we mentioned before, the training procedure for AdaBoost is quite different from the one for Random Forests. In AdaBoost, we will train each model *in succession*, using the errors the previous model made to influence how the next model is trained. This process is a specific example of the general concept of **boosting** in ensembles, where future models are trained based on the results of previous models.\n",
"\n",
"To do this, we will keep track of two sets of weights for AdaBoost:\n",
"\n",
"* Model weights $\\hat{w}_t$ that we will use to weight the predictions from each model. These are the weights discussed in the last section. The intuition for how we will compute these weights is that a more accurate model in our ensemble should have a higher weight in the ensemble.\n",
"* Dataset weights $\\alpha_i$ that will influence how we train each model. The intuition is that training puts more emphasis on examples with higher weight, and we will increase the weight of examples that are often misclassified.\n",
"\n",
"So at a high-level, our AdaBoost training algorithm will have the following steps. We will repeat this algorithm at the end of the section with all of the details filled in.\n",
"\n",
"```{prf:algorithm} AdaBoost Training\n",
":label: adaboost_training_1\n",
"\n",
"**Input**: Training Dataset $X \\in \\mathbb{R}^{n\\times d}, y \\in \\{+1, -1\\}^n$. Hyperparameters: Number of trees $T$\n",
"\n",
"**Output**: An AdaBoost ensemble of $T$ trees $\\hat{F}$\n",
"\n",
"1. For $t \\in [1, ... T]$:\n",
" 1. Learn $\\hat{f}_t(x)$ based on the current dataset weights $\\alpha_{i,t}$\n",
" 2. Compute model weight $\\hat{w}_t$ for the learned model\n",
" 3. Update dataset weights $\\alpha_{i, t+1}$\n",
"2. Return AdaBoost ensemble with $T$ trees and model weights $\\{\\hat{w}_t\\}_{t=1}^T$\n",
"```\n",
"\n",
"In the following sub-sections, we will explain each of these bullet points in detail.\n",
"\n",
"#### Learning from Weighted Data\n",
"\n",
"A key part of the AdaBoost algorithm is associating a weight with each example in our training set. At a high level, we will update the weights to increase the weights of examples we get wrong and decrease the weights of examples we get right. But how do we utilize those weights?\n",
"\n",
"```{table} Example Dataset Weights for a small cancer dataset\n",
":name: example_weights\n",
"\n",
"| TumorSize | IsSmoker | Malignant (y) | Weight |\n",
"|-----------|----------|---------------|--------|\n",
"| Small | No | No | 0.5 |\n",
"| Small | Yes | Yes | 1.2 |\n",
"| Large | No | No | 0.3 |\n",
"| Large | Yes | Yes | 0.5 |\n",
"| Small | Yes | No | 3.3 |\n",
"```\n",
"\n",
"Instead of finding a Decision Stump to minimize classification error, we will find the Decision Stump that minimizes a **weighted classification error**. The intuition is that we care about the fraction of the training dataset's weight that we label incorrectly, and want to minimize this weighted error. Another intuition: if an example has weight $\\alpha_{i, t} = 2$, then making a mistake on that example is twice as bad as making a mistake on an example with weight 1.\n",
"\n",
"$$WeightedError(f_t) = \\frac{\\sum_{i=1}^n \\alpha_{i,t} \\cdot \\indicator{\\hat{f}_t(x_i) \\neq y_i}}{\\sum_{i=1}^n \\alpha_{i,t}}$$\n",
"\n",
"So our decision stump learning algorithm is mostly the same, but now we try to find the stump with the lowest weighted error. That also means when it comes to deciding the prediction for a leaf node, we will predict the class with the largest weight at that leaf instead of the highest number of examples. Consider two possible splits for a decision stump on {numref}`example_weights`.\n",
"\n",
"```{figure} weighted_stumps.png\n",
"---\n",
"alt: Two decision stumps, one split on TumorSize and the other IsSmoker, and their predictions\n",
"width: 100%\n",
"align: center\n",
"---\n",
"Two possible Decision Stumps based on weighted error. Note that the right stump predicts No for both branches because each leaf node has more weight on the No class.\n",
"```\n",
"\n",
"To compute the weighted error for each stump, we compute the fraction of the total weight that gets misclassified.\n",
"\n",
"* For the first stump split on TumorSize, the weighted error is\n",
" $$\\frac{0.3 + 1.2}{5.8} \\approx 0.26$$\n",
"* For the second stump split on IsSmoker, the weighted error is\n",
" $$\\frac{1.7 + 0}{5.8} \\approx 0.29$$\n",
"\n",
"Since the first stump has lower weighted error, that's the one we would choose with these weights.\n",
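"\n",
"The weighted error computation above can be checked with a few lines of NumPy. The prediction vectors below follow the leaf majorities described for each stump, with Yes encoded as +1 and No as -1:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Dataset weights and labels from the table (Malignant: Yes=+1, No=-1)\n",
"alpha = np.array([0.5, 1.2, 0.3, 0.5, 3.3])\n",
"y = np.array([-1, +1, -1, +1, -1])\n",
"\n",
"def weighted_error(pred):\n",
"    # Fraction of the total weight that is misclassified\n",
"    return alpha[pred != y].sum() / alpha.sum()\n",
"\n",
"pred_size = np.array([-1, -1, +1, +1, -1])    # TumorSize stump's leaf majorities\n",
"pred_smoker = np.array([-1, -1, -1, -1, -1])  # IsSmoker stump: No in both leaves\n",
"\n",
"print(round(weighted_error(pred_size), 2))    # 0.26\n",
"print(round(weighted_error(pred_smoker), 2))  # 0.29\n",
"```\n",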
"\n",
"A similar procedure for finding the best split of a numeric feature can also be used, where we decide the threshold that has the lowest weighted error.\n",
"\n",
"#### Model Weights $\\hat{w}_t$\n",
"\n",
"Now that we have a procedure to train one of the Decision Stumps in our ensemble, we can compute the next set of weights outlined in {prf:ref}`adaboost_training_1`: the model weights $\\hat{w}_t$.\n",
"\n",
"Our intuition for these model weights was to assign more weight to models that are more accurate and less weight to models that are less accurate. Without proof, we claim that the following formula for the model weights achieves this.\n",
"\n",
"$$\\hat{w}_t = \\frac{1}{2} \\ln\\left( \\frac{1 - WeightedError(\\hat{f}_t)}{WeightedError(\\hat{f}_t)}\\right)$$\n",
"\n",
"If you plug in the weighted error from our last example (0.26), the model weight for that model would be\n",
"\n",
"$$\\hat{w}_t = \\frac{1}{2} \\ln\\left( \\frac{1 - WeightedError(\\hat{f}_t)}{WeightedError(\\hat{f}_t)}\\right) = \\frac{1}{2}\\ln\\left(\\frac{1 - 0.26}{0.26} \\right) \\approx 0.52$$\n",
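"\n",
"As a quick numeric check of this calculation (a minimal sketch, assuming only NumPy):\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"def model_weight(weighted_error):\n",
"    # AdaBoost model weight: half the log-odds of the model being correct\n",
"    return 0.5 * np.log((1 - weighted_error) / weighted_error)\n",
"\n",
"print(round(model_weight(0.26), 2))  # 0.52\n",
"```\n",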
"\n",
"In the following plot, we show that this formula has the desired property of assigning more weight to more accurate models and less weight to less accurate models."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f74414a1",
"metadata": {
"mystnb": {
"image": {
"align": "center",
"width": "75%"
}
},
"tags": [
"hide-input",
"remove-stderr"
]
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/tmp/ipykernel_144/1356952037.py:5: RuntimeWarning: divide by zero encountered in log\n",
" model_weights = np.log((1 - errors) / errors) / 2\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAjsAAAHHCAYAAABZbpmkAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABX30lEQVR4nO3dd1xT5/4H8E9YYQZENkQQcFRRURQUB9K6rdrW1nXrqrXL2upt7bz32t4O7Lj9abWtta2j06qt2rqqVnHgHlgniqAiG2SPAMnz+wNJjaASRk4SPu/XKy9feXLOyTeHg/nwnOc5RyaEECAiIiIyUxZSF0BERETUnBh2iIiIyKwx7BAREZFZY9ghIiIis8awQ0RERGaNYYeIiIjMGsMOERERmTWGHSIiIjJrDDtERERk1hh2yGzJZDK89dZbeq935coVyGQyrFy5sslraoi33noLMpmsUevm5OQ0aU2xsbGQyWRYt25dk263uU2bNg0BAQFSl2FQjTmea9b9+OOPm74wPQQEBGDatGmS1kCmjWGHmtXKlSshk8kgk8mwf//+Wq8LIaBUKiGTyfDggw9KUGHDHDlyBDKZDP/3f/9X67UxY8ZAJpNhxYoVtV4bMGAAfH19DVGi3t5//31s2LBB6jJajE6dOqFbt2612tevXw+ZTIaoqKhary1fvhwymQzbt283RIl62bJlS4P+uGhKNf/X1PV45plnJK2NpMWwQwZha2uLH3/8sVb7nj17cP36dcjlcgmqargePXrA3t6+zgB34MABWFlZIS4uTqe9oqICR48eRd++ffV6r3/9618oKytrVL31wbBjWP369cOZM2dQUFCg0x4XFwcrKyscPXoUlZWVtV6ztLREnz596v0+/v7+KCsrw+TJk5uk7jvZsmUL3n777WZ9j/oYPHgwvvvuu1qPJ554QurSSEIMO2QQI0aMwNq1a1FVVaXT/uOPPyIsLAxeXl4SVdYwVlZWiIiIqBVoEhISkJOTg3HjxtUKQsePH0d5eTn69eun93vZ2to2umYyLv369YNGo8GBAwd02uPi4jBu3DiUlZXh+PHjOq/t378fXbt2hZOTU73fRyaTwdbWFpaWlk1St7Fr3749Hn/88VqP8PDwu65XWlpaZ3tVVRUqKioaVVNJSUmj1qfGY9ghg5g4cSJyc3OxY8cObVtFRQXWrVuHSZMm1blOSUkJXnrpJSiVSsjlcnTo0AEff/wxhBA6y6lUKsydOxfu7u5wcnLC6NGjcf369Tq3mZqaiieeeAKenp6Qy+Xo3Lkzli9f3qDP1K9fP2RmZiIxMVHbFhcXB4VCgaeeekobfG59rWa9Glu3bkX//v3h4OAAJycnjBw5EmfPntV5n7rG7JSVleGFF16Am5ub9jOnpqbecZxSfn4+pk2bBhcXFzg7O2P69Ok6/7nLZDKUlJRg1apV2m7/+oyRUKvVeOONN+Dl5QUHBweMHj0aKSkpOsvs27cPjz32GNq0aQO5XA6lUom5c+fW6q3KyMjA9OnT4efnB7lcDm9vb4wZMwZXrlzRWa4++wwANmzYgJCQENja2iIkJATr16+/5+e51eeff47OnTtDLpfDx8cHs2bNQn5+vs4yAwcOREhICM6dO4fo6GjY29vD19cXH3744T23X3Mc3BqYy8vLceLECTzyyCMIDAzUeS07OxsXL17UOX7qczzfaczO2rVr0alTJ539c7cxTcuWLUNQUBDkcjl69eqFo0ePal+bNm0aPvvsMwC6p5JqaDQaLFy4EJ07d4atrS08PT3x9NNPIy8vT+c9hBB499134efnB3t7e0RHR9f5s22smp/b8ePHMWDAANjb2+ONN97QGaO0cOFC7ec9d+4cAGDXrl3aY8/FxQVjxozB+fPndbZd8/t67tw5TJo0Ca1atdL7DxxqelZSF0AtQ0BAAPr06YOffvoJw4cPB1D9pVVQUIAJEybg008/1VleCIHRo0dj9+7dmDFjBk
JDQ/HHH39g3rx5SE1N1Rkr8+STT+L777/HpEmTEBkZiV27dmHkyJG1asjMzETv3r0hk8nw/PPPw93dHVu3bsWMGTNQWFiIOXPm6PWZav4D279/P4KDgwFUf3H17t0bERERsLa2xoEDBzB69Gjta05OTtpxGt999x2mTp2KoUOH4oMPPkBpaSm++OIL9OvXDydPnrzrQNpp06ZhzZo1mDx5Mnr37o09e/bU+ZlrjBs3Dm3btkVMTAxOnDiBr7/+Gh4eHvjggw+0tTz55JMIDw/HU089BQAICgq65z547733IJPJ8OqrryIrKwsLFy7EoEGDEB8fDzs7OwDVX6qlpaV49tln0bp1axw5cgSLFy/G9evXsXbtWu22xo4di7Nnz2L27NkICAhAVlYWduzYgWvXrmn3RX332fbt2zF27Fh06tQJMTExyM3N1Qap+njrrbfw9ttvY9CgQXj22WeRkJCAL774AkePHkVcXBysra21y+bl5WHYsGF45JFHMG7cOKxbtw6vvvoqunTpoj3W6xIYGAgfHx+dHsCjR4+ioqICkZGRiIyMRFxcHF566SUA0PYA1Rx3jTmeN2/ejPHjx6NLly6IiYlBXl4eZsyYccfxZD/++COKiorw9NNPQyaT4cMPP8QjjzyCpKQkWFtb4+mnn0ZaWhp27NiB7777rtb6Tz/9NFauXInp06fjhRdeQHJyMpYsWYKTJ0/q7M///Oc/ePfddzFixAiMGDECJ06cwJAhQ/TqWSkvL69zQL5CoYCNjY32eW5uLoYPH44JEybg8ccfh6enp/a1FStWoLy8HE899RTkcjlcXV2xc+dODB8+HIGBgXjrrbdQVlaGxYsXo2/fvjhx4kSt39fHHnsM7dq1w/vvv1/rDzSSgCBqRitWrBAAxNGjR8WSJUuEk5OTKC0tFUII8dhjj4no6GghhBD+/v5i5MiR2vU2bNggAIh3331XZ3uPPvqokMlkIjExUQghRHx8vAAgnnvuOZ3lJk2aJACI+fPna9tmzJghvL29RU5Ojs6yEyZMEM7Oztq6kpOTBQCxYsWKu362wsJCYWlpKWbMmKFt69Chg3j77beFEEKEh4eLefPmaV9zd3cXgwcPFkIIUVRUJFxcXMTMmTN1tpmRkSGcnZ112ufPny9u/VU9fvy4ACDmzJmjs+60adNqfeaadZ944gmdZR9++GHRunVrnTYHBwcxderUu37mGrt37xYAhK+vrygsLNS2r1mzRgAQixYt0rbV7NdbxcTECJlMJq5evSqEECIvL08AEB999NEd31OffRYaGiq8vb1Ffn6+tm379u0CgPD397/rZ8vKyhI2NjZiyJAhQq1Wa9uXLFkiAIjly5dr26KiogQA8e2332rbVCqV8PLyEmPHjr3r+whR/TtgZ2cnKioqhBDV+6Vt27ZCCCE+//xz4eHhoV325ZdfFgBEamqqEKJxx3OXLl2En5+fKCoq0rbFxsbW2j8167Zu3VrcuHFD275x40YBQPz+++/atlmzZom6vlL27dsnAIgffvhBp33btm067TX7feTIkUKj0WiXe+ONNwSAeh2bAO74+Omnn7TL1fzcli5dqrN+zedVKBQiKytL57XQ0FDh4eEhcnNztW2nTp0SFhYWYsqUKdq2mt+5iRMn3rNeMhyexiKDqRmHsGnTJhQVFWHTpk13PIW1ZcsWWFpa4oUXXtBpf+mllyCEwNatW7XLAai13O1/1Qoh8Msvv2DUqFEQQiAnJ0f7GDp0KAoKCnDixAm9Po+TkxO6du2q/cs8JycHCQkJiIyMBAD07dtXexri4sWLyM7O1v5VvmPHDuTn52PixIk6tVhaWiIiIgK7d+++4/tu27YNAPDcc8/ptM+ePfuO69w+E6V///7Izc1FYWGhXp/5dlOmTNEZP/Loo4/C29tb+3MBoO3hAapPTebk5CAyMhJCCJw8eVK7jI2NDWJjY2ud2qhR332Wnp6O+Ph4TJ06Fc7Oztr1Bw8ejE6dOt
3zM+3cuRMVFRWYM2cOLCz+/i9y5syZUCgU2Lx5s87yjo6OePzxx7XPbWxsEB4ejqSkpHu+V79+/XTG5sTFxekcP1lZWbh06ZL2tbZt28LHx6dRx3NaWhpOnz6NKVOmwNHRUdseFRWFLl261LnO+PHj0apVK+3z/v37A0C9PuPatWvh7OyMwYMH69QZFhYGR0dH7c+tZr/Pnj1b5xSYvj2uY8aMwY4dO2o9oqOjdZaTy+WYPn16ndsYO3Ys3N3dtc9rjqlp06bB1dVV2961a1cMHjxY53ivwdlfxoWnschg3N3dMWjQIPz4448oLS2FWq3Go48+WueyV69ehY+PT62BmPfdd5/29Zp/LSwsap1y6dChg87z7Oxs5OfnY9myZVi2bFmd75mVlaX3Z+rXrx8WL16MnJwcHDhwAJaWlujduzcAIDIyEp9//jlUKlWt8To1X2D3339/ndtVKBR3fM+az9y2bVud9ppTaXVp06aNzvOaL668vLy7vte9tGvXTue5TCZDcHCwzjiba9eu4T//+Q9+++23WkGmZiaSXC7HBx98gJdeegmenp7o3bs3HnzwQUyZMkU7eL2++6zm2Li9NqD6uLhXqK1Z//ZjyMbGBoGBgdrXa/j5+dUaU9WqVSv89ddfd30fQHfcTkREBA4cOIB3330XABASEgKFQoG4uDgolUocP34c48ePB9C447mm/rqOl+Dg4Dr3z92On3u5dOkSCgoK4OHhcdc67/Rzc3d31wla9+Ln54dBgwbdczlfX1+d01q3uv13607HBFD9f9Iff/yBkpISODg43HEbJC2GHTKoSZMmYebMmcjIyMDw4cPh4uJikPfVaDQAgMcffxxTp06tc5muXbvqvd2asBMXF4cDBw6gS5cu2r+WIyMjoVKpcPToUezfvx9WVlbaIFRTz3fffVfnTDQrq6b91bzTTBzRzGMJ1Go1Bg8ejBs3buDVV19Fx44d4eDggNTUVEybNk27H4Dqv+BHjRqFDRs24I8//sC///1vxMTEYNeuXejevbvB91l9NWbfduvWDU5OTti/fz9GjBiBGzduaHt2LCwsEBERgf379yMoKAgVFRXacNRcx/OdNOYzajQaeHh44Icffqjz9Vt7UAzp1h5HfV5riu2T4THskEE9/PDDePrpp3Ho0CH8/PPPd1zO398fO3fuRFFRkU7vzoULF7Sv1/yr0Whw+fJlnb+6EhISdLZXM1NLrVbX66+++rp1kPLBgwd1rqHj4+MDf39/xMXFIS4uDt27d4e9vT2Avwf/enh46F1PzWdOTk7W+Sv41llhDdGQqzTX9LbUEEIgMTFR+0V7+vRpXLx4EatWrcKUKVO0y906K+9WQUFBeOmll/DSSy/h0qVLCA0Nxf/+9z98//339d5nNcfG7bUBtY+Lu62fkJCAwMBAbXtFRQWSk5Ob9Pip6QmMi4vD/v37oVAodE4lRUZG4ueff9b2wtQcb405nms+X13HS2OOoTsdP0FBQdi5cyf69u171wBw68/t1v2enZ1drx6k5nTrMXG7CxcuwM3NTadXh4wPx+yQQTk6OuKLL77AW2+9hVGjRt1xuREjRkCtVmPJkiU67f/3f/8HmUymneVS8+/ts7kWLlyo89zS0hJjx47FL7/8gjNnztR6v+zs7IZ8HPj4+KBt27b4888/cezYMe1f5TUiIyOxYcMGJCQk6Ew/HTp0KBQKBd5///1aF467Vz1Dhw4FUD01+laLFy9u0Geo4eDgUGtq9b18++23KCoq0j5ft24d0tPTtT+Xmh6BW3sAhBBYtGiRznZKS0tRXl6u0xYUFAQnJyeoVCoA9d9n3t7eCA0NxapVq3Qu2Ldjxw7tFOK7GTRoEGxsbPDpp5/q1P3NN9+goKDgrrPeGqJfv37Izs7GihUrEBERoTNOKDIyEgkJCdi4cSNat26tPY3bmOPZx8cHISEh+Pbbb1FcXKxt37NnD06fPt3gz1HzZX/7MTRu3Dio1Wq88847td
apqqrSLj9o0CBYW1tj8eLFOvv99t9lKdx6TN36+c6cOYPt27djxIgR0hVH9cKeHTK4O3W732rUqFGIjo7Gm2++iStXrqBbt27Yvn07Nm7ciDlz5mj/yg8NDcXEiRPx+eefo6CgAJGRkfjzzz/r/At1wYIF2L17NyIiIjBz5kx06tQJN27cwIkTJ7Bz507cuHGjQZ+nX79+2um2t18dOTIyEj/99JN2uRoKhQJffPEFJk+ejB49emDChAlwd3fHtWvXsHnzZvTt27dW0KsRFhaGsWPHYuHChcjNzdVOPb948SKAhvXQ1Gx3586d+OSTT7QhLiIi4q7ruLq6ol+/fpg+fToyMzOxcOFCBAcHY+bMmQCAjh07IigoCC+//DJSU1OhUCjwyy+/1PpL/eLFi3jggQcwbtw4dOrUCVZWVli/fj0yMzMxYcIEvfdZTEwMRo4ciX79+uGJJ57AjRs3sHjxYnTu3FnnC74u7u7ueP311/H2229j2LBhGD16NBISEvD555+jV69eOoORm0LNcXHw4MFa10iqmVp+6NAhjBo1Sudn25jj+f3338eYMWPQt29fTJ8+HXl5eViyZAlCQkLuuX/uJCwsDED1ZIGhQ4fC0tISEyZMQFRUFJ5++mnExMQgPj4eQ4YMgbW1NS5duoS1a9di0aJFePTRR+Hu7o6XX34ZMTExePDBBzFixAicPHkSW7duhZubW73ruHjxIr7//vta7Z6enhg8eHCDPhsAfPTRRxg+fDj69OmDGTNmaKeeOzs7S36bDKoHKaaAUctx69Tzu7l96rkQ1VON586dK3x8fIS1tbVo166d+Oijj3SmpQohRFlZmXjhhRdE69athYODgxg1apRISUmpNQ1bCCEyMzPFrFmzhFKpFNbW1sLLy0s88MADYtmyZdpl6jv1vMaXX36pnYZ9uxMnTminvmZmZtZ6fffu3WLo0KHC2dlZ2NraiqCgIDFt2jRx7Ngx7TK3Tz0XQoiSkhIxa9Ys4erqKhwdHcVDDz0kEhISBACxYMGCWutmZ2frrF/zc0lOTta2XbhwQQwYMEDY2dndc6pvzdTzn376Sbz++uvCw8ND2NnZiZEjR2qnk9c4d+6cGDRokHB0dBRubm5i5syZ4tSpUzr7OCcnR8yaNUt07NhRODg4CGdnZxERESHWrFnToH0mhBC//PKLuO+++4RcLhedOnUSv/76q5g6deo9p57XWLJkiejYsaOwtrYWnp6e4tlnnxV5eXk6y0RFRYnOnTvXWlef9ykpKRFWVlYCgNi+fXut17t27SoAiA8++KDWa405nlevXi06duwo5HK5CAkJEb/99psYO3as6NixY61167okwO2/X1VVVWL27NnC3d1dyGSyWsfssmXLRFhYmLCzsxNOTk6iS5cu4pVXXhFpaWnaZdRqtXj77beFt7e3sLOzEwMHDhRnzpwR/v7+jZ56HhUVpV3uTj+3u31eIYTYuXOn6Nu3r7CzsxMKhUKMGjVKnDt3TmeZO/3OkbRkQvBqR0TmID4+Ht27d8f333+Pf/zjH1KXQyYoNDQU7u7udxxTRWSqOGaHyATVdWPQhQsXwsLCAgMGDJCgIjIllZWVte5TFxsbi1OnTmHgwIHSFEXUjDhmh8gEffjhhzh+/Diio6NhZWWFrVu3YuvWrXjqqaegVCqlLo+MXGpqKgYNGoTHH38cPj4+uHDhApYuXQovLy9eDI/MEk9jEZmgHTt24O2338a5c+dQXFyMNm3aYPLkyXjzzTclu94MmY6CggI89dRTiIuLQ3Z2NhwcHPDAAw9gwYIF9bonGpGpYdghIiIis8YxO0RERGTWGHaIiIjIrLX4k/sajQZpaWlwcnJq8MXYiIiIyLCEECgqKoKPj4/Olcfr0uLDTlpaGmevEBERmaiUlBT4+fnddZkWH3ZqbjKZkpIChULRLO8hhEDof3dArRHY9VIUPBS2zfI+RERELUVhYSGUSqXOzaLvpMWHnZpTVwqFotnCDg
C0buWMnOIKVFjYNuv7EBERtST1GYLCAcoG4uYoBwBkFZXfY0kiIiJqSgw7BlJz6iqrSCVxJURERC0Lw46BeDhV9+xkM+wQEREZFMOOgdSEnaxCnsYiIiIyJIYdA9GGHfbsEBERGRTDjoFwzA4REZE0GHYM5O+eHZ7GIiIiMiSGHQPxcLrZs1OoAm80T0REZDgMOwbioaju2VFVaVBYViVxNURERC0Hw46B2FpbwtXBBgCQml8mcTVEREQtB8OOAfm62AEArueVSlwJERFRy8GwY0A1YYc9O0RERIbDsGNAfq1uhp08hh0iIiJDYdgxIN9W7NkhIiIyNIYdA+JpLCIiIsNj2DEgX57GIiIiMjiGHQNq42oPAMgtqUBReaXE1RAREbUMDDsG5GRrDfebt41Iyi6RuBoiIqKWgWHHwALdHAAASTnFEldCRETUMjDsGFige3XYSWbPDhERkUEw7BhYoJsjAOByDsMOERGRITDsGFhNzw7H7BARERkGw46BBbpX9+wk5xRDoxESV0NERGT+GHYMzK+VHawsZCiv1CC9sFzqcoiIiMwew46BWVtaoE3r6uvtJGVzRhYREVFzY9iRQHsPJwBAQkaRxJUQERGZP4YdCdznrQAAnEsvlLgSIiIi88ewI4FOPjfDThrDDhERUXNj2JFATdhJzCqGqkotcTVERETmjWFHAj7OtnC2s0aVRuBSJgcpExERNSeTDzsxMTHo1asXnJyc4OHhgYceeggJCQlSl3VXMpkMnW6O2znPcTtERETNyuTDzp49ezBr1iwcOnQIO3bsQGVlJYYMGYKSEuO+QrF23A7DDhERUbOykrqAxtq2bZvO85UrV8LDwwPHjx/HgAEDJKrq3jrfDDt/XS+QuBIiIiLzZvJh53YFBdXhwdXVtc7XVSoVVCqV9nlhoTQ9K93btAIAnE4tQEWVBjZWJt/JRkREZJTM6htWo9Fgzpw56Nu3L0JCQupcJiYmBs7OztqHUqk0cJXVAlrbw9XBBhVVGpxNY+8OERFRczGrsDNr1iycOXMGq1evvuMyr7/+OgoKCrSPlJQUA1b4N5lMhu5KFwDAiWv5ktRARETUEphN2Hn++eexadMm7N69G35+fndcTi6XQ6FQ6Dyk0sO/+lTWiWt5ktVARERk7kx+zI4QArNnz8b69esRGxuLtm3bSl1SvXVv4wIAOHmVYYeIiKi5mHzYmTVrFn788Uds3LgRTk5OyMjIAAA4OzvDzs5O4ururpufCywtZEgrKEfKjVIoXe2lLomIiMjsmPxprC+++AIFBQUYOHAgvL29tY+ff/5Z6tLuyUFuhW5+zgCAg0m5EldDRERknky+Z0cIIXUJjRIZ5IYT1/Jx8HIuxvWUZmYYERGROTP5nh1TFxnUGgBw4HKOyQc3IiIiY8SwI7Ee/q1gY2WBzEIVknKM+xYXREREpohhR2K21pYIu3k15X0XsyWuhoiIyPww7BiBgR3cAQC7Ehh2iIiImhrDjhF44D5PAMChy7koUVVJXA0REZF5YdgxAkHuDvBvbY8KtQb7LuVIXQ4REZFZYdgxAjKZDPd39AAA7LqQKXE1RERE5oVhx0gMunkqa9eFbGg0nIJORETUVBh2jESvAFc4ya2QU6zCyZR8qcshIiIyGww7RsLGygKDOlX37vx+Kk3iaoiIiMwHw44RGdXNGwCw+XQ61DyVRURE1CQYdoxIv2B3ONtZI7tIhcPJvDEoERFRU2DYMSI2VhYYHuIFgKeyiIiImgrDjpEZ3c0HALDldAZUVWqJqyEiIjJ9DDtGJiKwNbwUtigoq8SOc7zmDhERUWMx7BgZSwsZHuvpBwD4+WiKxNUQERGZPoYdIzSupxIAsO9SDlJulEpcDRERkWlj2DFCSld79G/nBgBYe4y9O0RERI3BsGOkxveq7t1ZfTQFFVUaiashIiIyXQw7RmpIJy94OMmRVaTCpr84DZ2IiKihGHaMlI2VBaZGBgAAvtmfDCF4RWUiIq
KGYNgxYpPC28DW2gJn0wpxOPmG1OUQERGZJIYdI9bKwQaP9Kiehv7N/mSJqyEiIjJNDDtG7om+bQEAO89n4lJmkcTVEBERmR6GHSMX7OGIIZ08IQSweFei1OUQERGZHIYdE/DCA+0AAL//lYbErGKJqyEiIjItDDsmIMTXGYO1vTuXpC6HiIjIpDDsmIgXa3p3TqUhMYtjd4iIiOqLYcdE1PTuaATwwbYEqcshIiIyGQw7JuTVYR1gaSHDjnOZOJyUK3U5REREJoFhx4QEezhhws17Zr2/5Tw0Gl5VmYiI6F4YdkzMnEHt4WBjiVPXC/A775lFRER0Tww7JsbdSY5nooIAAO9tPo+i8kqJKyIiIjJuDDsmaOaAQAS0tkdWkQr/235R6nKIiIiMGsOOCbK1tsQ7D4UAAL49eAVnUgskroiIiMh4MeyYqP7t3DGqmw80AnhzwxmoOViZiIioTgw7JuzfI++Dk9wKp1LysfLAFanLISIiMkoMOybMQ2GL10Z0BAB8uO0CLmfzvllERES3Y9gxcZPC26B/OzeoqjR4ac0pVKk1UpdERERkVBh2TJxMJsOHj3aFk60V4lPy8eXeJKlLIiIiMioMO2bA29kOb43qDABYuPMi4lPypS2IiIjIiDDsmIlHevhiZBdvVKoFnv/xBApKebFBIiIigGHHbMhkMsSM7YI2rva4nleGeetOQQhORyciImLYMSMKW2t8NqkHbCwtsP1cJqejExERgWHH7HTxc8abI+8DUH1n9GNXbkhcERERkbQYdszQlD7+2vE7z3x/HKn5ZVKXREREJBmGHTMkk8nw0WNd0clbgZziCsxcdQylFVVSl0VERCQJhh0zZW9jha+m9kRrBxucSy/EvLV/ccAyERG1SAw7ZszXxQ5fTg6DtaUMm0+n44NtCVKXREREZHAMO2auZ4ArYh7pCgBYuucyVsQlS1wRERGRYTHstACPhvlh3tAOAID/bjqH30+lSVwRERGR4TDstBDPDQzC1D7+EAL455p4xCXmSF0SERGRQTDstBAymQz/GdVZOyX9qW+P4fhVXoOHiIjMH8NOC2JpIcMn47uhX7AbSirUmLr8KE5ey5O6LCIiombFsNPCyK0s8dWUnugd6IpiVRWmLD+Cv67nS10WERFRs2HYaYHsbCzxzdRe6BXQCkXlVZj8zRGcSS2QuiwiIqJmwbDTQjnIrbBiejh6tHFBQVklJi47hKO8jxYREZkhhp0WzFFuhVVPhCO8rSuKVFWY/M1h7LmYLXVZRERETYphp4VzsrXGqunhGNjBHeWVGjy56ii2nE6XuiwiIqImw7BDsLOxxLLJPfFg1+pp6c//eAI/HbkmdVlERERNgmGHAAA2VhZYNKE7JoYroRHA67+exgfbLkCj4c1DiYjItDHskJalhQzvP9wFcwa1AwB8EXsZL6w+ifJKtcSVERERNRzDDumQyWSYM6g9Pn6sG6wsZNj0Vzoe//owbpRUSF0aERFRg5h82Nm7dy9GjRoFHx8fyGQybNiwQeqSzMKjYX749olwONla4djVPIz5bD/OpRVKXRYREZHeTD7slJSUoFu3bvjss8+kLsXsRAa74ddnI9HG1R4pN8ow9osD2PQX75hORESmRSaEMJsRqDKZDOvXr8dDDz1U73UKCwvh7OyMgoICKBSK5ivOhOWXVmD2Tyex71L1ndKfHRiEl4d0gKWFTOLKiIiopdLn+9vke3b0pVKpUFhYqPOgu3Oxt8GKab3w9IBAANUDl59YeRR5HMdDREQmoMWFnZiYGDg7O2sfSqVS6pJMgpWlBV4fcR8WTQiFrbUF9lzMxohP9+EYbzFBRERGrsWFnddffx0FBQXaR0pKitQlmZQxob749dm+aOvmgPSCcoxfdgifxybyejxERGS0WlzYkcvlUCgUOg/STycfBX6f3Q9jQn2g1gh8uC0B01ceRW6xSurSiIiIamlxYYeahqPcCgvHh+KDsV0gt6o+rTV80T7eSJ
SIiIyOyYed4uJixMfHIz4+HgCQnJyM+Ph4XLvGezs1N5lMhvG92uC35/sh2MMRWUUqTF1+BP/ZeAZlFbzqMhERGQeTn3oeGxuL6OjoWu1Tp07FypUr77k+p543jbIKNRZsPY9VB68CAALdHPC/cd3QvU0riSsjIiJzpM/3t8mHncZi2Gla+y5lY97av5BRWA5LCxlmDQzC7AfawdrS5DsRiYjIiPA6OySZ/u3c8cecARjdrXrw8qe7EjF6SRz+up4vdWlERNRCMexQk3O2t8anE7tj8cTuaGVvjfPphXjoszi8v+U8x/IQEZHBMexQsxnVzQc7/hmF0d18oBHAsr1JGLpwLw4k5khdGhERtSAMO9Ss3Bzl+HRid3wztSe8nW1x7UYpJn19GK+sO4X8Ut5ugoiImh/DDhnEA/d5YvvcAZjc2x8AsObYdUR/HIvVR67x6stERNSsGHbIYJxsrfHOQyFY+0wftPd0RF5pJV779TQe+eIATl8vkLo8IiIyU5x6zqnnkqhUa7DqwBUs3HkJxaoqyGTApPA2mDe0A1zsbaQuj4iIjBynnpPRs7a0wJP9A7HrpSg8FOoDIYAfDl9D9Mex+OHwVVSpNVKXSEREZoI9O+zZMQqHknIxf+NZJGQWAQDaezrijRH3YWAHD4krIyIiY8QrKOuBYcd4VKo1+P7QVSz68xLySysBAP3bueHNkfehoxd/NkRE9DeGHT0w7BifgtJKLNl9CSsPXEGlWsBCBozvpcTcwe3h4WQrdXlERGQEGHb0wLBjvK7mluCDbRew5XQGAMDBxhIzBwTiyf6BcJRbSVwdERFJiWFHDww7xu/olRt4d9M5nLo5Pd3VwQbPDQzC4739YWttKXF1REQkBYYdPTDsmAaNRmDLmXR8sv0iknJKAADezrZ44YF2eDTMj3dVJyJqYRh29MCwY1qq1Br8cuI6Fu68hPSCcgBAWzcHzB3cHg928YaFhUziComIyBAYdvTAsGOayivV+OHwNXy2OxE3SqrvsdXRywmz72+H4SFeDD1ERGaOYUcPDDumrVhVheX7k/HV3iQUqaoAAO08HDH7gXYY2cUblgw9RERmiWFHDww75qGgtBLL45KxPC4ZReXVoSfQ3QGz7w/GqK4+sOKYHiIis8KwoweGHfNSWF6JVXFX8PX+ZBSUVV+YsK2bA2ZFB+OhUIYeIiJzwbCjB4Yd81RUXolvD17F1/uSkHfzasx+rewws38gxvVUws6GU9aJiEwZw44eGHbMW4mqCt8fuople5OQe3Mgs6uDDab2CcCUPv5o5cA7rBMRmSKGHT0w7LQMZRVqrDuegmX7kpByowwAYGdtiQnhSjzZPxC+LnYSV0hERPpg2NEDw07LUqXWYMuZDCyNvYxz6YUAACsLGUZ388HTUUHo4OUkcYVERFQfDDt6YNhpmYQQ2HcpB0v3XMaBy7na9gHt3fFE3wAMaOfOa/UQERkxhh09MOzQX9fz8eWeJGw9kw7Nzd+GIHcHTO/bFmN7+HEwMxGREWLY0QPDDtW4lluKVQev4OejKSi+eYFCZztrTIpogyl9/OHtzHE9RETGgmFHDww7dLui8kqsPXYdKw9cwbUbpQAASwsZRnTxxhN9A9C9TSuJKyQiIoYdPTDs0J2oNQJ/ns/E8rhkHEq6oW0PVbpgSh9/jOjiDVtrnuIiIpICw44eGHaoPs6mFWD5/iv4/VQaKtQaAEAre2uM66XE4xH+ULraS1whEVHLwrCjB4Yd0kd2kQprjqXgh0NXkVZQDgCQyYDoDh6Y3NsfA9q78+ajREQGwLCjB4YdaogqtQa7LmThu0NXse9SjrZd6WqHf0T4Y1xPJVx5dWYiombDsKMHhh1qrOScEvxw6CrWHEtB4c07rttYWeDBrt6YFN4GYf6tIJOxt4eIqCkx7OiBYYeaSlmFGr+fSsO3h67gTGqhtj3YwxETeinxSA8/9vYQETURhh09MOxQUxNCID4lHz8duYbfT6WjrF
INALCxtMCQzp6YGN4GfQJb8wrNRESNwLCjB4Ydak5F5ZX47VQaVh9JwenUAm17G1d7jO+lxGNhfvBQ2EpYIRGRaWLY0QPDDhnKmdQCrD56DRtPpqHo5hWaLS1kiO7ggYnhSkS1d4eVpYXEVRIRmQaGHT0w7JChlVWosfl0OlYfuYZjV/O07V4KWzzcwxdje/gh2MNRwgqJiIwfw44eGHZISpcyi7D6aAp+PXEdeaWV2vbubVzwaJgfHuzqA2c7awkrJCIyTgw7emDYIWOgqlJj1/ksrDt+HbEXs6G+eft1GysLDO3shUfD/NAv2I0XLCQiuolhRw8MO2RssorKsfFkGtYeT8HFzGJtu6dCjkd6+PE0FxERGHb0wrBDxkoIgTOphVh3PAUbT6Uh/5bTXKHK6tNco7r6wNmep7mIqOVh2NEDww6ZAlWVGrsvVJ/m2p2ge5rrgY4eeKi7LwZ2cIfcindhJ6KWgWFHDww7ZGqyi1TYGJ+KtceuIyGzSNvubGeNEV288XB3X/T0b8WLFhKRWWPY0QPDDpkqIQTOpxdhQ3wqNsanIrNQpX3N18UOY0J98HB3X7TzdJKwSiKi5sGwoweGHTIHao3A4aRcrD+Ziq1nMlB886KFANDJW4GHu/tidKgPPHm1ZiIyEww7emDYIXNTXqnGzvOZ2HAyDbEJWai6Ob5HJgP6BrlhTKgPhoV4wcmWA5uJyHQx7OiBYYfM2Y2SCmw+nY4NJ1Nx/JarNcutLDCokydGd/NBVHt32FpzYDMRmRaGHT0w7FBLcS23FBvjU7E+PhVJ2SXadie5FYZ09sKobt7oG+wGa96fi4hMAMOOHhh2qKURQuB0agF+i0/Dpr/SkVFYrn3N1cEGw0O8MKqbD8IDXDmji4iMFsOOHhh2qCXTaASOXc3D76fSsOV0OnJLKrSveSrkGNnFB6NDfdDNzxkyGYMPERkPhh09MOwQVatSa3AwKRe/n0rD1jMZKCr/e0aX0tUOo7r6YFQ3H3T0cmLwISLJMezogWGHqDZVlRp7L+bg91Np2HEuE2WVau1rwR6OGN3NBw929UagO+/RRUTSYNjRA8MO0d2VVlRh14Us/H4qDbsTslFRpdG+FuKrwINdfTCyizeUrvYSVklELQ3Djh4Ydojqr7C8EtvPZuL3U2nYn5ijvUcXAHT1c8aILt4MPkRkEAw7emDYIWqYGyUV2HomHVtOp+Pg5VzcknsYfIio2THs6IFhh6jxcopV+ONsBjb/lY5DSQw+RNT8GHb0wLBD1LQYfIjIEBh29MCwQ9R8copV2HYmA1tO1x18RnbxxggGHyJqAIYdPTDsEBkGgw8RNSWGHT0w7BAZ3t2CT7ebp7oYfIjobiQNO2VlZbCzs2vKTTYrhh0iad2rx4djfIioLpKGnbCwMBw/flyn7cKFC+jYsWNTvk2TYdghMh7ZRX8Pbj6crBt8uvjW9Ph4wb+1g3RFEpFRkCTs/P777zh37hyWL1+OnTt3QqlUal/r1q0bTp061RRv0+QYdoiMU03wqavHp7OPQtvjE+DG4EPUEunz/W3VVG8aEhKClJQU5OTkYOrUqbh69Sp8fX3h7e0Na2vrpnqbO/rss8/w0UcfISMjA926dcPixYsRHh7e7O9LRM3D3UmOx3v74/He/sgtVuGPs5nVFzBMysXZtEKcTSvER38k4D5vBUZ28cKILrxXFxHVrdE9O4mJiQgODtY+37t3LwYMGAAASE1NxdWrVxESEtKsvSY///wzpkyZgqVLlyIiIgILFy7E2rVrkZCQAA8Pj7uuy54dItNyo6RC2+Nz4HKuzi0rOno5YWQXbwzv4o1gDwYfInNm0NNYcrkcQ4YMwZw5c/DAAw80ZlMNFhERgV69emHJkiUAAI1GA6VSidmzZ+O1116767oMO0SmK6+kAtvPZWDz6QwcSMxB1S3Bp4OnU/Wprq5eCPZwkrBKImoOBg07KSkp+PLLL/H111/Dzc0NL774IiZPngxbW9
vGbLbeKioqYG9vj3Xr1uGhhx7Stk+dOhX5+fnYuHGjzvIqlQoqlUr7vLCwEEqlkmGHyMTll1Zg+7nqU137L+kGn3YejjeDjzfaezL4EJkDfcKOhb4b/+mnn3SeK5VKvPvuu0hJScEbb7yBVatWwc/PD6+//jpSUlL03bzecnJyoFar4enpqdPu6emJjIyMWsvHxMTA2dlZ+7h1IDURmS4XexuM66nEyunhOP6vwfjo0a64v6MHrC1luJRVjEV/XsKQ/9uLQZ/swSc7LuJCRiFa+GXGiFqMevfsZGRk4LnnnoOLiwuWL1+uba+oqEB+fj7y8vKQl5eHGzduYPfu3Vi6dCkqKip0elGaQ1paGnx9fXHgwAH06dNH2/7KK69gz549OHz4sM7y7NkhalkKyiqx82aPz75LOahQa7SvBbo7aK/c3NHLCTKZTMJKiUgfzTIba9myZaisrNQJOgBga2sLR0dHuLm5QaFQQKFQwNnZGaNHj4azs3PDPoEe3NzcYGlpiczMTJ32zMxMeHl51VpeLpdDLpc3e11EZByc7awxNswPY8P8UFheiT/PZ2LzXxnYezEbSdklWLwrEYt3JSLQzQHDb87q6uStYPAhMiP17tnJz8/Hiy++iOLiYvzyyy/a9gkTJmDHjh2YPHkyXnjhBQQGBjZbsXcSERGB8PBwLF68GED1AOU2bdrg+eef5wBlIqpTUXkl/jyfhc2n07HnYjYqqv7u8Qloba+9ZUVnHwYfImPUrAOUt2zZghEjRui0Xb9+HUuWLME333yDvn37Ys6cORg4cKDehTfUzz//jKlTp+LLL79EeHg4Fi5ciDVr1uDChQu1xvLcjmGHiIrKK7HrQha2nE5HbEI2VLcEH//W9hgeUn0BwxBfBh8iYyHZ7SJKS0uxatUqLFq0CLa2tpgzZw6mTZvWVJu/qyVLlmgvKhgaGopPP/0UERER91yPYYeIblWsqsKuC1nYejoduxOyUF75d/BRutphREh1j09XP2cGHyIJGTTsLFmyBEVFRTqP/Px87Nq1CyUlJVCr1Y3ZfLNj2CGiOylRVWF3QnWPz64LusHHr5UdRnTxxvAQL4QqXRh8iAzMoGGnT58+cHFxueNj/Pjxjdl8s2PYIaL6KK2oQmxCNjafTseu81koq/z7DzlfFzsMD/HCiK7e6M7gQ2QQkt713NQw7BCRvsoq1IhNyMKWMxn483wmSiv+Dj4+zrYYFlJ95ebuylawsGDwIWoODDt6YNghosYor1QjNiEbW06n48/zmSi5Jfh4KWwxvIsXRnbxRo82DD5ETYlhRw8MO0TUVMor1dh7sTr47DyfhWJVlfY1T4Ucw28Obu7pz+BD1FgMO3pg2CGi5lBeqca+SznYejodO85louiW4OPhJK8e49PFGz0DXGHJ4EOkN4YdPTDsEFFzU1Wpsf9SDjbXBJ/yv4OPu5McwzpXB5/wtgw+RPXFsKMHhh0iMiRVlRoHEnOx+XQ6tp/NQOEtwcfN0QZDO1eP8Qlv6worS73v1UzUYjDs6IFhh4ikUlGlQdzlHGz5Kx3bz2WioKxS+1prBxsMDfHCg128ERHYmj0+RLdh2NEDww4RGYNKtQYHLudiy1/p+ONcBvJL/w4+Hk5yPNjVB6NDfdCNV24mAsCwoxeGHSIyNpVqDQ4l5WLzX+nYeiZDp8fHv7U9RnfzwZhQHwR7OElYJZG0GHb0wLBDRMasokqDvRezsfFUGnaey9S5cvN93gqMCfXBqG4+8HWxk7BKIsNj2NEDww4RmYoSVRV2ns/Eb/Fp2HMxG1Wav//77unfCmNCfTCiizdaO8olrJLIMBh29MCwQ0SmKK+kAlvPZOC3U6k4nHwDNf+TW1rI0C/YDaO7+WBIZ0842VpLWyhRM2HY0QPDDhGZuoyCcmz6Kw0b49NwOrVA2y63ssAD93lgdDcfRHf0gNzKUsIqiZoWw44eGHaIyJwkZRfjt1Np+O1UGpKyS7TtClsrPNjNB49090
WYfyvO6CKTx7CjB4YdIjJHQgicTSusDj7xacgoLNe+5t/aHg+F+uKRHr7wb+0gYZVEDcewoweGHSIyd2qNwKGkXPxy4jq2nclA6S13Zg/zb4WHu/viwa7ecLG3kbBKIv0w7OiBYYeIWpLSiir8cTYDv55IRVxiDmomdNlYWuD+jh54pIcvBnbwgI0Vb1VBxo1hRw8MO0TUUmUWlmNjfCp+PZGKCxlF2vZW9tZ4sKsPHunhi1ClC8f3kFFi2NEDww4REXAurRDrT17Hhvg0ZBeptO2B7g54LEyJsT184aGwlbBCIl0MO3pg2CEi+luVWoO4y7lYf+I6tp3NQHmlBkD19XuiO7jjsZ5K3N/RA9a8IztJjGFHDww7RER1K1ZVYfNfaVhz7DqOX83Ttrs52uDh7r54rKcS7T15fy6SBsOOHhh2iIjuLTGrGGuPp+CX46nIKf77NFeo0gXjeirxYDdvKHi1ZjIghh09MOwQEdVfpVqDPQnZWHMsBbsuZGnvz2VrbYERId54rKcSEW1dYWHBQc3UvBh29MCwQ0TUMNlFKmw4mYqfj6UgMatY297WzQETw5V4NEwJVwdeu4eaB8OOHhh2iIgaRwiB+JR8rDl2Hb+fSkOxqgpA9bV7hoV4YVJEG0S0deUUdmpSDDt6YNghImo6Jaoq/H4qDT8euYa/rv99U9IgdwdMDG+DR8P8eKVmahIMO3pg2CEiah5nUgvww+Fr2Bifqr1FhY2VBR7s4o1JEW14Q1JqFIYdPTDsEBE1r6LySmyMT8OPh6/hXHqhtr2DpxP+0bsNHunhB0e5lYQVkili2NEDww4RkWEIIXDqegF+OHQVv/+Vpr1goaPcCo+G+WFqZADauvEu7FQ/DDt6YNghIjK8grJKrD9xHd8euoqk7BJte1R7d0yLDEBUe3dOX6e7YtjRA8MOEZF0NBqB/Yk5WHXgCnYlZKHmGymgtT0m9wnAYz39eLFCqhPDjh4YdoiIjMPV3BJ8d/Aqfj6WgqLy6unr9jaWeKSHL6ZFBiDYg7emoL8x7OiBYYeIyLiUqKqw/mQqVh24gku3XKxwYAd3PNU/EH2CWnMWFzHs6INhh4jIOAkhcPByLlYeuIId5zO1p7g6+ygws38gRnb15t3XWzCGHT0w7BARGb+ruSX4Zn8y1h67jrLK6mv2eDvb4om+bTE+XMlxPS0Qw44eGHaIiExHXkkFfjh8FSsPXNXefd1RboWJ4UpM79sWPi52EldIhsKwoweGHSIi01NeqcbG+FR8tS9ZexNSKwsZRnXzwXMDg9DOk4OZzR3Djh4YdoiITJdGI7DnYjaW7U3CwaRcAIBMBgzr7IVZ0cEI8XWWuEJqLgw7emDYISIyD39dz8dnuxPxx9lMbdvADu54PjoYPQNcJayMmgPDjh4YdoiIzEtCRhE+j03E76fSoLn5Ddc70BXPR7dD32BOWzcXDDt6YNghIjJPV3JKsHTPZfxy4joq1dVfdd2ULpgzqB0Gtndn6DFxDDt6YNghIjJvafllWLY3CT8duQZVVfXNR8P8W+Glwe0RGewmcXXUUAw7emDYISJqGbKLVFi29zK+PXhVG3p6B7ripSEd0ItjekwOw44eGHaIiFqWzMJyfL47ET8dSUGFujr0DGjvjn8Obo9QpYu0xVG9MezogWGHiKhlSs0vw5JdiVh7LAVVN0cyD7rPE68O68Dr9JgAhh09MOwQEbVs13JL8emuS/j1xHVoBGAhA8b3UmLuoPbwUNhKXR7dAcOOHhh2iIgIABKzivHRHxe01+mxs7bEzAGBeGpAIBzlVhJXR7dj2NEDww4REd3q2JUbeH/LeZy4lg8AcHO0wYuD2mNCLyXvsm5EGHb0wLBDRES3E0Jg25kMfLDtAq7klgIAAt0d8O8HOyG6g4fE1RHAsKMXhh0iIrqTSrUGPx25hkU7LyG3pAIA8EBHD/z7wU4IcHOQuLqWjWFHDww7RER0L0XllVi8KxHL9yejSiNgY2mBJ/
q1xfP3B3M8j0QYdvTAsENERPWVmFWM/246h70XswEAHk5yvD6iIx4K9eXtJwyMYUcPDDtERKQPIQT+PJ+F/246h2s3qsfzhLd1xfsPhyDYg9fnMRR9vr85rJyIiEgPMpkMgzp5YvvcAZg3tAPsrC1xJPkGhi/ah092XER5pVrqEuk2DDtEREQNYGttiVnRwdg+dwCiO7ijUi3w6Z+XMGLRPhy4nCN1eXQLhh0iIqJGULraY/m0Xvj8Hz3g4SRHUk4JJn11GC+tOYW8mzO4SFoMO0RERI0kk8kwoos3dr4Uhcm9/SGTAb+cuI7B/7cXO85lSl1ei8ewQ0RE1EQUttZ456EQ/PJsJII9HJFTrMLMb4/hnz/Ho6C0UuryWiyGHSIioibWo00rbJrdD09HBcJCBvx6MhVDFu7Brgvs5ZECww4REVEzsLW2xOvD78PaZyIR6OaAzEIVnlh5DK+u+wslqiqpy2tRGHaIiIiaUZh/K2x5sT+e7NcWMhnw87EUjFq8H2dSC6QurcVg2CEiImpmttaW+NeDnfDTzN7wdrZFUk4JHv48Dl/tTYJG06Kv7WsQJh923nvvPURGRsLe3h4uLi5Sl0NERHRHvQNbY+uL/TG0sycq1QLvbTmPqSuOIKuoXOrSzJrJh52Kigo89thjePbZZ6UuhYiI6J5c7G2w9PEwvPdwCGytLbDvUg5GLNqPw0m5Updmtkw+7Lz99tuYO3cuunTpInUpRERE9SKTyfCPCH/8/nw/dPB0Qk6xCpO+Poyv9iahhd+yslmYfNjRl0qlQmFhoc6DiIhICu08nbB+ViQe7u4Ltab6tNasH0+gmLO1mlSLCzsxMTFwdnbWPpRKpdQlERFRC2ZvY4VPxnXDf8d0hrWlDFtOZ2D0kv24lFkkdWlmwyjDzmuvvQaZTHbXx4ULFxq07ddffx0FBQXaR0pKShNXT0REpB+ZTIYpfQLw89N94KWwRVJ2CR7+/ABiE7KkLs0syIQRnhzMzs5Gbu7dB2oFBgbCxsZG+3zlypWYM2cO8vPz9XqvwsJCODs7o6CgAAqFoiHlEhERNZmcYhWe++EEjiTfgIUM+M+DnTA1MgAymUzq0oyKPt/fVgaqSS/u7u5wd3eXugwiIiKDc3OU4/sZEfjXhtNYc+w63vr9HBKzizF/VGdYWxrlCRmjZ/J77dq1a4iPj8e1a9egVqsRHx+P+Ph4FBcXS10aERFRg9hYWeCDsV3x+vCOkMmA7w9dw/QVR1FYzpuJNoRRnsbSx7Rp07Bq1apa7bt378bAgQPvuT5PYxERkTHbfjYDL66OR1mlGvd5K7DqiV7wcLKVuizJ6fP9bfJhp7EYdoiIyNidSS3AtBVHkFNcgTau9vhuRjj8WztIXZak9Pn+NvnTWEREROYuxNcZ656JhNLVDtdulGLsFwdxNo03Eq0vhh0iIiITEODmgF+eicR93grkFKsw4ctDOHblhtRlmQSGHSIiIhPhobDF6qd6I7ytK4pUVZiy/AiOJDPw3AvDDhERkQlxtrPGqunh6BfshtIKNaatOIJDvInoXTHsEBERmRg7G0t8PbUn+rerDjzTVxzFgcs5UpdltBh2iIiITJCttSW+mtITUe3dUVapxhMrj+L4VZ7SqgvDDhERkYmytbbEl5PDENXeHeWVGkxfcRTn0wulLsvoMOwQERGZMFtrSyx9PAw9/VuhsLwKk785gis5JVKXZVQYdoiIiEycnY0lvpnWSzst/fFvDiOjoFzqsowGww4REZEZcLazxrdPhCOgtT2u55Vh+sqjKFZVSV2WUWDYISIiMhPuTnJ8NyMCbo5ynE8vxAs/nYRa06LvCgWAYYeIiMisKF3t8dWUMMitLLDrQhbe3XxO6pIkx7BDRERkZrq3aYVPxoUCAFbEXcF3B69IWo/UGHaIiIjM0Miu3pg3tAMA4O3fz+FoC76PFsMOERGRmXpuYBBGdfNBlUZg1g8nkFXUMmdoMewQERGZKZlMhgWPdE
E7D0dkFanw/I8nUanWSF2WwTHsEBERmTEHuRWWTg6Do9wKR5Jv4IOtF6QuyeAYdoiIiMxckLsjPn6sKwDg6/3J2H0hS+KKDIthh4iIqAUYFuKNaZEBAIB5604hp1glbUEGxLBDRETUQrw2vCM6eDohp7gCr6z7C0K0jAsOMuwQERG1ELbWllg0MRQ2Ny84+P3ha1KXZBAMO0RERC1IRy8FXh3WEQAQs+U8rueVSlxR82PYISIiamGmRwagV0ArlFao8eb6M2Z/Oothh4iIqIWxsJBhwdiusLGywJ6L2dgQnyp1Sc2KYYeIiKgFCnJ3xIsPtAMA/Pf3c2Y9O4thh4iIqIV6akAg7vNWIK+0Eh9uM9+LDTLsEBERtVDWlhZ496EQAMCaY9cRn5IvbUHNhGGHiIioBQvzb4VHevgCAN767Sw0GvMbrMywQ0RE1MK9NqwjHGwsEZ+Sj19Pmt9gZYYdIiKiFs5DYYvZNwcr/297Asor1RJX1LQYdoiIiAjTIgPg7WyL9IJyfH/oqtTlNCmGHSIiIoKttSXmDmoPAPhsdyIKyyslrqjpMOwQERERAOCRHr4IcndAXmklvt6bJHU5TYZhh4iIiAAAVpYWmDe0AwDgm/3JyC+tkLiipsGwQ0RERFpDO3vhPm8FSirUWHXAPMbuMOwQERGRlkwmw7MDgwAAKw8ko7SiSuKKGo9hh4iIiHSMCPGCf2t75JVW4qcjKVKX02gMO0RERKTDytICz0RV9+58tTcJqirTvu4Oww4RERHV8kgPX3gq5MgoLMfW0xlSl9MoDDtERERUi9zKEo9H+AMAVh64Im0xjcSwQ0RERHWaGNEGNpYWiE/JxykTviM6ww4RERHVyc1RjpFdvQEAqw5ekbaYRmDYISIiojuaGhkAANh0Kt1kLzLIsENERER3FKp0wX3eClSoNfjtVJrU5TQIww4RERHd1WNhfgCAdcevS1xJwzDsEBER0V091N0X1pYy/HW9AAkZRVKXozeGHSIiIrorVwcbPNDREwCw9pjpXVGZYYeIiIju6dGbp7J+O5UGjUZIXI1+GHaIiIjonga0d4fC1gpZRSocu5ondTl6YdghIiKie7KxssDgTl4AgC2n0yWuRj8MO0RERFQvI7v+HXZM6VQWww4RERHVS79gdziZ4Kkshh0iIiKql+pTWdWzskzpVBbDDhEREdXbkJvjdmITsiSupP4YdoiIiKje+ga3hrWlDFdyS3Elp0TqcuqFYYeIiIjqzcnWGj39XQGYTu8Oww4RERHpZWAHdwBA7MVsiSupH4YdIiIi0svADh4AgIOXc1FeqZa4mntj2CEiIiK9tPd0hLezLVRVGhxKypW6nHti2CEiIiK9yGQyRLWvPpV14DLDDhEREZmhiMDqQcqHk29IXMm9MewQERGR3iLatgYAnEktQLGqSuJq7s6kw86VK1cwY8YMtG3bFnZ2dggKCsL8+fNRUVEhdWlERERmzcfFDn6t7KDWCBw38ltHWEldQGNcuHABGo0GX375JYKDg3HmzBnMnDkTJSUl+Pjjj6Uuj4iIyKxFtG2N63nXcSQ5VzuGxxiZdNgZNmwYhg0bpn0eGBiIhIQEfPHFFww7REREzSwi0BW/nLiOI0Y+bsekT2PVpaCgAK6urlKXQUREZPZ6tGkFADidWoAqtUbiau7MrMJOYmIiFi9ejKeffvqOy6hUKhQWFuo8iIiISH+Bbg5wkluhvFKDi5nFUpdzR0YZdl577TXIZLK7Pi5cuKCzTmpqKoYNG4bHHnsMM2fOvOO2Y2Ji4OzsrH0olcrm/jhERERmycJChi5+zgCAv67nS1vMXciEEELqIm6XnZ2N3Ny7X6QoMDAQNjY2AIC0tDQMHDgQvXv3xsqVK2FhcecMp1KpoFKptM8LCwuhVCpRUFAAhULRNB+AiIiohfhg2wV8EXsZE8OViHmkq8Het7CwEM7OzvX6/jbKAcru7u5wd6/fqO7U1FRER0cjLCwMK1asuG
vQAQC5XA65XN4UZRIREbV43W727MSnFEhcyZ0ZZdipr9TUVAwcOBD+/v74+OOPkZ39991Xvby8JKyMiIioZeimdAEAXMwsQlmFGnY2ltIWVAeTDjs7duxAYmIiEhMT4efnp/OaEZ6dIyIiMjteClu4OcqRU6xCQmYRQm+GH2NilAOU62vatGkQQtT5ICIiouYnk8nQ0csJAJCQYZwznE067BAREZH02nvWhB3jnH7OsENERESNou3ZyWTPDhEREZmhDtrTWEUSV1I3hh0iIiJqlHaejpDJgJziCuQUq+69goEx7BAREVGj2NtYoY2rPQDgohH27jDsEBERUaPVDFK+wLBDRERE5qidhyMAICnH+GZkMewQERFRowW4OQAAruSUSlxJbQw7RERE1Ghtb4ad5JwSiSupjWGHiIiIGs2/dfUA5bSCMpRXqiWuRhfDDhERETWau6McDjaWEAJIuWFcp7IYdoiIiKjRZDLZ3+N2chl2iIiIyAwFtK4OO1dzjWvcDsMOERERNQnfVnYAgLT8cokr0cWwQ0RERE3C16U67KTm8zQWERERmaG/w06ZxJXoYtghIiKiJlFzGis1j2GHiIiIzFBN2MkrrURpRZXE1fyNYYeIiIiahMLWGk62VgCMq3eHYYeIiIiajDGO22HYISIioibjqbAFAGQVqiSu5G8MO0RERNRkPJzkAIDsYoYdIiIiMkMeiuqwk1VoPBcWZNghIiKiJuPueDPsFLFnh4iIiMyQx80xO9kMO0RERGSOasbssGeHiIiIzJKH083ZWEXlEEJIXE01hh0iIiJqMu43e3bKKzUoUhnHVZQZdoiIiKjJ2NlYwklefRVlY7nWDsMOERERNanWjjYAgLzSCokrqWYldQFSqzmfWFhYKHElRERE5sFBVgmNqhTXM3PRwbV5okbN93Z9xgXJhLGMHpLI9evXoVQqpS6DiIiIGiAlJQV+fn53XabFhx2NRoO0tDQ4OTlBJpM16bYLCwuhVCqRkpIChULRpNumv3E/Gwb3s+FwXxsG97NhNNd+FkKgqKgIPj4+sLC4+6icFn8ay8LC4p6JsLEUCgV/kQyA+9kwuJ8Nh/vaMLifDaM59rOzs3O9luMAZSIiIjJrDDtERERk1hh2mpFcLsf8+fMhl8ulLsWscT8bBvez4XBfGwb3s2EYw35u8QOUiYiIyLyxZ4eIiIjMGsMOERERmTWGHSIiIjJrDDtERERk1hh2Gumzzz5DQEAAbG1tERERgSNHjtx1+bVr16Jjx46wtbVFly5dsGXLFgNVatr02c9fffUV+vfvj1atWqFVq1YYNGjQPX8uVE3f47nG6tWrIZPJ8NBDDzVvgWZE332dn5+PWbNmwdvbG3K5HO3bt+f/H/Wg735euHAhOnToADs7OyiVSsydOxfl5eUGqtY07d27F6NGjYKPjw9kMhk2bNhwz3ViY2PRo0cPyOVyBAcHY+XKlc1bpKAGW716tbCxsRHLly8XZ8+eFTNnzhQuLi4iMzOzzuXj4uKEpaWl+PDDD8W5c+fEv/71L2FtbS1Onz5t4MpNi777edKkSeKzzz4TJ0+eFOfPnxfTpk0Tzs7O4vr16wau3LTou59rJCcnC19fX9G/f38xZswYwxRr4vTd1yqVSvTs2VOMGDFC7N+/XyQnJ4vY2FgRHx9v4MpNi777+YcffhByuVz88MMPIjk5Wfzxxx/C29tbzJ0718CVm5YtW7aIN998U/z6668CgFi/fv1dl09KShL29vbin//8pzh37pxYvHixsLS0FNu2bWu2Ghl2GiE8PFzMmjVL+1ytVgsfHx8RExNT5/Ljxo0TI0eO1GmLiIgQTz/9dLPWaer03c+3q6qqEk5OTmLVqlXNVaJZaMh+rqqqEpGRkeLrr78WU6dOZdipJ3339RdffCECAwNFRUWFoUo0C/ru51mzZon7779fp+2f//yn6Nu3b7PWaU7qE3ZeeeUV0blzZ5228ePHi6FDhzZbXTyN1U
AVFRU4fvw4Bg0apG2zsLDAoEGDcPDgwTrXOXjwoM7yADB06NA7Lk8N28+3Ky0tRWVlJVxdXZurTJPX0P383//+Fx4eHpgxY4YhyjQLDdnXv/32G/r06YNZs2bB09MTISEheP/996FWqw1VtslpyH6OjIzE8ePHtae6kpKSsGXLFowYMcIgNbcUUnwXtvgbgTZUTk4O1Go1PD09ddo9PT1x4cKFOtfJyMioc/mMjIxmq9PUNWQ/3+7VV1+Fj49PrV8u+ltD9vP+/fvxzTffID4+3gAVmo+G7OukpCTs2rUL//jHP7BlyxYkJibiueeeQ2VlJebPn2+Isk1OQ/bzpEmTkJOTg379+kEIgaqqKjzzzDN44403DFFyi3Gn78LCwkKUlZXBzs6uyd+TPTtk1hYsWIDVq1dj/fr1sLW1lbocs1FUVITJkyfjq6++gpubm9TlmD2NRgMPDw8sW7YMYWFhGD9+PN58800sXbpU6tLMSmxsLN5//318/vnnOHHiBH799Vds3rwZ77zzjtSlUSOxZ6eB3NzcYGlpiczMTJ32zMxMeHl51bmOl5eXXstTw/ZzjY8//hgLFizAzp070bVr1+Ys0+Tpu58vX76MK1euYNSoUdo2jUYDALCyskJCQgKCgoKat2gT1ZBj2tvbG9bW1rC0tNS23XfffcjIyEBFRQVsbGyatWZT1JD9/O9//xuTJ0/Gk08+CQDo0qULSkpK8NRTT+HNN9+EhQX7B5rCnb4LFQpFs/TqAOzZaTAbGxuEhYXhzz//1LZpNBr8+eef6NOnT53r9OnTR2d5ANixY8cdl6eG7WcA+PDDD/HOO+9g27Zt6NmzpyFKNWn67ueOHTvi9OnTiI+P1z5Gjx6N6OhoxMfHQ6lUGrJ8k9KQY7pv375ITEzUBkoAuHjxIry9vRl07qAh+7m0tLRWoKkJmIK3kWwyknwXNtvQ5xZg9erVQi6Xi5UrV4pz586Jp556Sri4uIiMjAwhhBCTJ08Wr732mnb5uLg4YWVlJT7++GNx/vx5MX/+fE49rwd99/OCBQuEjY2NWLdunUhPT9c+ioqKpPoIJkHf/Xw7zsaqP3339bVr14STk5N4/vnnRUJCgti0aZPw8PAQ7777rlQfwSTou5/nz58vnJycxE8//SSSkpLE9u3bRVBQkBg3bpxUH8EkFBUViZMnT4qTJ08KAOKTTz4RJ0+eFFevXhVCCPHaa6+JyZMna5evmXo+b948cf78efHZZ59x6rmxW7x4sWjTpo2wsbER4eHh4tChQ9rXoqKixNSpU3WWX7NmjWjfvr2wsbERnTt3Fps3bzZwxaZJn/3s7+8vANR6zJ8/3/CFmxh9j+dbMezoR999feDAARERESHkcrkIDAwU7733nqiqqjJw1aZHn/1cWVkp3nrrLREUFCRsbW2FUqkUzz33nMjLyzN84SZk9+7ddf6fW7Nvp06dKqKiomqtExoaKmxsbERgYKBYsWJFs9YoE4J9c0RERGS+OGaHiIiIzBrDDhEREZk1hh0iIiIyaww7REREZNYYdoiIiMisMewQERGRWWPYISIiIrPGsENEBhEbGwuZTIb8/Px6r/PWW28hNDS02Wq63cCBAzFnzhyDvR8RGQbDDhHpWLp0KZycnFBVVaVtKy4uhrW1NQYOHKizbE2AuXz58j23GxkZifT0dDg7OzdpvYYMKCtXroRMJqv1sLW1Ncj7E1HD8K7nRKQjOjoaxcXFOHbsGHr37g0A2LdvH7y8vHD48GGUl5drv9x3796NNm3a1OsO5zY2Nve8U70pUCgUSEhI0GmTyWR3XL6uu5ILIaBWq2Flpd9/wQ1dj6ilY88OEeno0KEDvL29ERsbq22LjY3FmDFj0LZtWxw6dEinPTo6GkD1HaVjYmLQtm1b2NnZoVu3bli3bp3Osrefxvrqq6+gVCphb2+Phx9+GJ988glcXFxq1fTdd98hICAAzs7OmDBhAoqKigAA06ZNw549e7Bo0SJtL8
uVK1cAAGfOnMHw4cPh6OgIT09PTJ48GTk5OdptlpSUYMqUKXB0dIS3tzf+97//1Wv/yGQyeHl56Tw8PT21rw8cOBDPP/885syZAzc3NwwdOlT72bdu3YqwsDDI5XLs378fKpUKL7zwAjw8PGBra4t+/frh6NGjtfbZ7esRkX4YdoiolujoaOzevVv7fPfu3Rg4cCCioqK07WVlZTh8+LA27MTExODbb7/F0qVLcfbsWcydOxePP/449uzZU+d7xMXF4ZlnnsGLL76I+Ph4DB48GO+9916t5S5fvowNGzZg06ZN2LRpE/bs2YMFCxYAABYtWoQ+ffpg5syZSE9PR3p6OpRKJfLz83H//feje/fuOHbsGLZt24bMzEyMGzdOu9158+Zhz5492LhxI7Zv347Y2FicOHGiSfbfqlWrYGNjg7i4OCxdulTb/tprr2HBggU4f/48unbtildeeQW//PILVq1ahRMnTiA4OBhDhw7FjRs3dLZ3+3pEpKdmvc0oEZmkr776Sjg4OIjKykpRWFgorKysRFZWlvjxxx/FgAEDhBBC/PnnnwKAuHr1qigvLxf29vbiwIEDOtuZMWOGmDhxohDi7zsj19xBevz48WLkyJE6y//jH/8Qzs7O2ufz588X9vb2orCwUNs2b948ERERoX0eFRUlXnzxRZ3tvPPOO2LIkCE6bSkpKQKASEhIEEVFRcLGxkasWbNG+3pubq6ws7Orta1brVixQgAQDg4OOo9hw4bp1NO9e3ed9Wo++4YNG7RtxcXFwtraWvzwww/atoqKCuHj4yM+/PDDO65HRPrjiV8iqmXgwIEoKSnB0aNHkZeXh/bt28Pd3R1RUVGYPn06ysvLERsbi8DAQLRp0wZnz55FaWkpBg8erLOdiooKdO/evc73SEhIwMMPP6zTFh4ejk2bNum0BQQEwMnJSfvc29sbWVlZd63/1KlT2L17NxwdHWu9dvnyZZSVlaGiogIRERHadldXV3To0OGu2wUAJyenWj1AdnZ2Os/DwsLqXLdnz546dVRWVqJv377aNmtra4SHh+P8+fN3XI+I9MewQ0S1BAcHw8/PD7t370ZeXh6ioqIAAD4+PlAqlThw4AB2796N+++/H0D1bC0A2Lx5M3x9fXW2JZfLG1WLtbW1znOZTAaNRnPXdYqLizFq1Ch88MEHtV7z9vZGYmJig+uxsLBAcHDwXZdxcHDQq/1eGroeEVXjmB0iqlN0dDRiY2MRGxurM+V8wIAB2Lp1K44cOaIdr9OpUyfI5XJcu3YNwcHBOg+lUlnn9jt06KAzGBdAref1YWNjA7VardPWo0cPnD17FgEBAbXqcXBwQFBQEKytrXH48GHtOnl5ebh48aLe799QQUFB2nE9NSorK3H06FF06tTJYHUQtQTs2SGiOkVHR2PWrFmorKzU9uwAQFRUFJ5//nlUVFRow46TkxNefvllzJ07FxqNBv369UNBQQHi4uKgUCgwderUWtufPXs2BgwYgE8++QSjRo3Crl27sHXr1rtO465LQEAADh8+jCtXrsDR0RGurq6YNWsWvvrqK0ycOBGvvPIKXF1dkZiYiNWrV+Prr7+Go6MjZsyYgXnz5qF169bw8PDAm2++CQuLe//9J4RARkZGrXYPD496rV/DwcEBzz77LObNmwdXV1e0adMGH374IUpLSzFjxgy99gER3R3DDhHVKTo6GmVlZejYsaPO1OqoqCgUFRVpp6jXeOedd+Du7o6YmBgkJSXBxcUFPXr0wBtvvFHn9vv27YulS5fi7bffxr/+9S8MHToUc+fOxZIlS/Sq8+WXX8bUqVPRqVMnlJWVITk5GQEBAYiLi8Orr76KIUOGQKVSwd/fH8OGDdMGko8++kh7usvJyQkvvfQSCgoK7vl+hYWFOp+7Rnp6ut7XEVqwYAE0Gg0mT56MoqIi9OzZE3/88QdatWql13aI6O5kQgghdRFERAAwc+ZMXLhwAfv27ZO6FCIyI+zZISLJfP
zxxxg8eDAcHBywdetWrFq1Cp9//rnUZRGRmWHPDhFJZty4cYiNjUVRURECAwMxe/ZsPPPMM1KXRURmhmGHiIiIzBqnnhMREZFZY9ghIiIis8awQ0RERGaNYYeIiIjMGsMOERERmTWGHSIiIjJrDDtERERk1hh2iIiIyKwx7BAREZFZ+3+o87q6fgs+xgAAAABJRU5ErkJggg==",
"text/plain": [
"