The Ten-Day MBA 4th Ed. Page 17


  Let’s apply these new pieces of probability theory to finance. The monthly stock returns of a volatile stock, Pioneer Aviation, are assumed to be normally distributed, as shown by the plotted graph. A summary of historical returns shows a mean (center) of 1 percent and an SD (dispersion) of 11 percent. Gerald Rasmussen wanted to know the probability that next month’s return would be less than 13 percent.

  THE PROBABILITY DENSITY FUNCTION

  Monthly Stock Returns of Pioneer Aviation

  Using our new Z value tool we can figure it out: Z = (13 percent − 1 percent) / 11 percent = 1.09. The return of 13 percent lies 1.09 standard deviations above the mean.

  The normal distribution table I have provided in the appendix tells us that the area between the mean and 1.09 SDs is .3621. The entire left half of the graph equals .5000, as any complete half of the distribution would; in any normal distribution there is a 50 percent chance of being above or below the mean. Combining those pieces of information, I calculate there is a .8621 (.3621 + .5000) probability that next month’s return will be less than 13 percent, and conversely a .1379 chance that it will be greater (1 − .8621). This is a real-world answer to a real-world business problem using statistics as our tools.
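  The same lookup can be checked in a few lines of Python's standard library. This is a minimal sketch; the mean of 1 percent and SD of 11 percent come from the text, and any tiny difference from the table value of .8621 is just rounding of Z to 1.09 in the printed table.

```python
from statistics import NormalDist

# Monthly returns of Pioneer Aviation: mean 1%, SD 11% (from the text)
returns = NormalDist(mu=1, sigma=11)

z = (13 - returns.mean) / returns.stdev  # (13 - 1) / 11 = 1.09 SDs
p_below_13 = returns.cdf(13)             # P(next month's return < 13%)

print(round(z, 2), round(p_below_13, 4))
```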


  Statistics is not difficult if you do not dwell too long on theory. Other distributions exist, but are rarely used in business. The Poisson distribution (pronounced “pwah-SOHN”) is similar to the normal distribution but has a flaring tail on the right side of the graph. Most distributions, however, are assumed to be normal to take advantage of the normal distribution’s laws of standard deviations.

  CUMULATIVE DISTRIBUTION FUNCTIONS

  A cumulative distribution function (CDF) is a cumulative view of a probability distribution. It takes a probability density function, such as a bell curve, and asks, “What is the probability that the outcome is less than or equal to a given value?” The density curve tells you how likely any single outcome is, while the CDF tells you the probability for an entire range of values. The CDF can also be used to marry our knowledge of uncertainty (probability theory) to our decision-making tool (decision trees). A CDF captures the range of possible outcomes of many-valued uncertain quantities.
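  For a distribution over a handful of outcomes, the CDF is just a running sum of the probabilities. A small sketch (the payoff values and probabilities here are hypothetical, chosen only to show the mechanics):

```python
from itertools import accumulate

# Hypothetical payoffs (in $) and their probabilities
outcomes = [50_000, 500_000, 1_000_000, 6_000_000]
probs    = [0.10,   0.40,    0.40,      0.10]

cdf = list(accumulate(probs))  # running total: P(X <= each outcome)

# P(payoff <= $1,000,000) is the cumulative probability at that value
p_at_most_1m = cdf[outcomes.index(1_000_000)]
print(p_at_most_1m)  # 0.10 + 0.40 + 0.40 = 0.90
```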

  To continue our oil well example, let’s take the distribution of the possible values of oil that may be in the ground if oil is recovered:

  In the tree constructed before, we used $1,000,000 as our payoff. That amount was the expected monetary value (EMV) of the oil, because I conveniently chose it for the example. The distribution was actually a wide range of values. There was a .005 chance of a $6,000,000 payday and a .005 chance of $50,000, as shown by the table of values. If you multiply each of the dollar values by its individual probability in the second column and add them up, the total equals $1,000,000, the EMV we used before.
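  The EMV arithmetic is a probability-weighted sum. The sketch below uses a hypothetical table: the .005/$6,000,000 and .005/$50,000 endpoints come from the text, but the middle rows are made up for illustration, so the total lands near (not exactly at) $1,000,000.

```python
# Value-of-oil table: (probability, dollar value); middle rows are hypothetical
table = [
    (0.005, 6_000_000),
    (0.245, 1_500_000),
    (0.500, 1_000_000),
    (0.245,   400_000),
    (0.005,    50_000),
]

assert abs(sum(p for p, _ in table) - 1.0) < 1e-9  # probabilities must sum to 1

emv = sum(p * v for p, v in table)  # probability-weighted sum of the payoffs
print(round(emv))
```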

  Constructing a cumulative distribution function allows decision makers to arrive at the mean or EMV when they are not certain what it is to begin with. Drawing a CDF is a method of combining a series of your judgments about the probability of the upper, middle, and lower bounds of an unknown outcome to arrive at an EMV to use for decision making.

  The CDF graph of ranges of outcomes resembles a big S. In the CDF, you see at a glance all the possible outcomes, not just static individual points. As shown by the following graph, Sam Houston believes that all his possible outcomes fall in the continuous “range” of $0 to $6,000,000.

  The range of probabilities from 0 to 1.0 in the CDF is divided into fractiles, or slices, using the bracket median technique. The CDF above is divided in that way. To divide the CDF probability ranges into five fractiles, for example, one would take the .1, .3, .5, .7, and .9 fractiles. Each of those fractiles would represent the average of the “ranges of values,” 0 to .2, .2 to .4, .4 to .6, .6 to .8, and .8 to 1.0, respectively.

  The .5 fractile is the same as the median, because half of the values are on either side of it. The median is not necessarily the same as the mean we used as the center of the normal distribution. The median is merely the center of the value range. The mean is the result of multiplying all the probabilities by the values, as was done to arrive at the $1,000,000 EMV for an oil discovery.

  To marry this CDF concept to the decision tree to make important management decisions, imagine how you would represent all the values an oil well may produce. It would be a range of values that would be represented by a fan of possibilities. One could not possibly draw the infinite possibilities of branches on the tree, so we use a CDF to help out.

  Drawing a CDF. To draw a CDF as shown below, you use your own judgment and your research data. You need to ask yourself a series of questions:

  What value would occur where results are either higher or lower 50 percent of the time (the median)?

  What value would be at the low end (.10 fractile)?

  What value would be at the high end (.90 fractile)?

  CUMULATIVE DISTRIBUTION FUNCTION

  Values of Possible Oil Drilling Outcomes

  (in thousands of $s)

  With the answers to these questions, you can draw the CDF of what you believe the range of outcomes is. By picking five outcomes using the five fractiles from the CDF, you can draw the event fan of five possibilities and probabilities on a decision tree as five branches.
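  Because each bracket-median fractile carries an equal probability of .2, the EMV of the five-branch event fan is just the average of the five fractile values. A sketch with hypothetical dollar values read off a judgmental CDF (chosen here so that they happen to average to the $1,000,000 EMV):

```python
# Hypothetical dollar values read off the CDF at the .1, .3, .5, .7, .9 fractiles
fractile_values = [200_000, 600_000, 1_000_000, 1_400_000, 1_800_000]

# Each of the five branches carries a probability of .2 (bracket medians)
emv = sum(0.2 * v for v in fractile_values)
print(round(emv))  # equals the simple average of the five fractile values
```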

  The expected monetary value is the same as in our first go-around, but that is only because I conveniently used the correct EMV to begin with.

  A shortcut to using five fractiles is called the Pearson-Tukey Method. Instead of five fractiles, the method uses only three: the .05, .5, and .95 fractiles. Their respective probabilities are .185, .63, and .185.
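  The three-branch shortcut works the same way as the five-branch fan. The .185/.63/.185 weights are from the text; the dollar values at each fractile are hypothetical:

```python
# (weight, hypothetical dollar value at the .05, .5, and .95 fractiles)
branches = [
    (0.185,   100_000),  # .05 fractile (low end)
    (0.630, 1_000_000),  # .5 fractile (median)
    (0.185, 2_000_000),  # .95 fractile (high end)
]

assert abs(sum(w for w, _ in branches) - 1.0) < 1e-9  # weights sum to 1

emv = sum(w * v for w, v in branches)
print(round(emv))
```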

  For large problems the decision tree has been computerized by Monte Carlo simulation programs, the most popular being Oracle’s Crystal Ball. The tree and the parameters of the “event fan” CDFs are included in the computer model. The program runs many simulations to give you an idea of how things may really turn out. Some Fortune 500 companies use it. Financial planners use it to evaluate long-term returns on investment portfolios.
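  The idea behind a Monte Carlo run can be sketched in a few lines without any special software. Everything below is hypothetical: a .4 chance of striking oil and a triangular distribution for the value of the oil stand in for the event fan’s CDF.

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def one_trial():
    # Hypothetical: 40% chance of striking oil
    if random.random() < 0.40:
        # Hypothetical value-of-oil distribution: (low, high, most likely)
        return random.triangular(50_000, 6_000_000, 700_000)
    return 0.0  # dry hole pays nothing

# Average the payoff over many simulated wells to estimate the EMV
trials = [one_trial() for _ in range(100_000)]
estimated_emv = sum(trials) / len(trials)
print(round(estimated_emv))
```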

  DECISION TREE OF OIL DRILLING USING THE CUMULATIVE DISTRIBUTION FUNCTION

  CDF and fractile analysis can be used for situations where the EMV of a branch of a decision tree is uncertain. However, the judgment of the analyst is most important. The tree is simply a tool that the MBA must use in tandem with his or her knowledge and intuition.

  REGRESSION ANALYSIS AND FORECASTING

  Linear regression models are used in a variety of business situations to determine relationships between variables that analysts believe intuitively to be related. Once a relationship is established, it can be used to forecast the future. Commonly regression analysis is used to relate sales to price, promotions, and market factors; stock prices to earnings and interest rates; and production costs to production volumes. But of course one can use it as well to find answers to questions such as “What is the effect of temperature on the sales of ice cream cones?” The independent variable (X) in this scenario is the temperature. It is the variable that is believed to cause other things to happen. The dependent variable (Y) is sales. Temperature affects sales, not vice versa.

  Regression analysis involves gathering sufficient data to determine the relationship between the variables. With many data points, such as five years’ worth of information on temperature and sales, a graph can be drawn with temperature along the X axis and sales along the Y axis. The goal of regression is to produce the equation of a line that “best” depicts this relationship. Regression “fits” a line between the plotted data points so that the squared differences between the points and the line are smallest. This least squares method requires a great deal of adding, subtracting, and multiplying, so a business calculator or computer spreadsheet program will be necessary.
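  The least squares arithmetic itself is simple enough to sketch directly. The temperature and sales figures below are made up for illustration, generated to fall roughly along a line so the fit is visible:

```python
def least_squares(xs, ys):
    """Fit y = m*x + b by minimizing the squared differences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    m = sxy / sxx                  # slope
    b = mean_y - m * mean_x        # y axis intercept
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r_squared = 1 - ss_res / ss_tot
    return m, b, r_squared

# Hypothetical monthly data: average temperature (F) and ice cream sales ($),
# scattered around a line by made-up noise
temps = [30, 45, 55, 65, 75, 85]
sales = [t * 16_431 - 379_066 + noise
         for t, noise in zip(temps, [12_000, -8_000, 5_000, -10_000, 7_000, -6_000])]

m, b, r2 = least_squares(temps, sales)
print(round(m), round(b), round(r2, 3))
```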

  AN ALGEBRA REFRESHER COURSE

  To set the stage for a regression example, let’s review some basic algebra. You recall that a line is described by this algebraic formula:

  Y = mX + b

  Y = dependent variable (such as sales)


  m = the slope of the line (the relationship between variables)

  X = the independent variable (such as temperature)

  b = y axis intercept (where the line crosses the vertical axis)

  The computer spreadsheet will calculate the linear equation (Y = mX + b) that defines the relationship between the independent and dependent variables. The program will determine whether the line that it has calculated as the best “fit” can be used as an accurate tool for forecasting.

  AN ICE CREAM REGRESSION EXAMPLE

  The owner of a chain of twenty Ben & Jerry’s ice cream shops noticed that as the temperature rose and fell, so did his sales. In an attempt to determine the precise mathematical relationship between sales and seasonal temperatures, he gathered the monthly sales data for the previous five years and the average temperatures for the months in question from the National Weather Service. His data looked as follows:

  Using the “Regression” function of the spreadsheet, the owner generated the following Excel output:

  WHAT DOES THIS MEAN?

  Wondrously enough, that block of information contains the equation for the line that describes the relationship between temperature and sales at Ben & Jerry’s. First let’s interpret the data in the output to get the line equation.

  “Coefficient of the Y Intercept” = b = −379,066

  “Coefficient of the X Variable” = m = 16,431

  Placing that information into a standard linear equation as described in the algebra refresher, Y = 16,431X − 379,066, plotting the data points, and drawing the regression line described by the equation, the graph looks like this:

  SALES OF BEN & JERRY’S ICE CREAM REGRESSION EXAMPLE

  As shown by the graph, the regression line runs through the middle of the data points. By plugging temperature, X, into the equation, the predicted ice cream sales can be calculated. In Ben & Jerry’s case, a temperature of 60 degrees would result in predicted monthly sales of $606,794.

  Y = (16,431 × 60 degrees) − 379,066 = $606,794

  But just how accurate is this equation in predicting the sales of ice cream? The answer to this question is given to us by another number in the “Regression Output.”

  R SQUARE EXPLAINED

  The R Square value tells us “what percent of the variation in the data is explained by the regression equation given.” In our case, 70.4 percent of the variation in sales is explained by the regression equation. This is considered very high. In broader economic analyses, an R Square of 30 percent would be considered very high, since there are thousands of variables that could affect economies. In the ice cream business, one could imagine that in addition to temperature, store advertising, couponing, and store hours could also explain sales fluctuations.

  But be careful! Do not read too much into the regression results. Regression only says that changes in sales occur along with changes in temperature in the way described. It does not say that temperature actually caused sales to move. But if a selected independent variable is reasonable and it is a good predictor of the desired dependent variable under study, use it.

  Regression analysis points not only to positive correlations, such as ice cream sales and temperature, but also to negative correlations, such as interest rates and housing sales. If interest rates are high, housing sales are low. In this case the X coefficient is a negative number. These negative relationships are just as useful predictors as positive/positive relationships.

  STANDARD ERROR EXPLAINED

  The “Standard Error of the Y Estimate” and the “Standard Error of the X Coefficient” shown in the spreadsheet output are synonyms for the standard deviations of the Y and X estimates of the regression line. In the Ben & Jerry’s example, the standard error of the Y estimate (sales) is plus or minus $243,334, 68 percent of the time. (It is listed in the Excel output found above.) In the same way, the output shows that the standard error of the X coefficient (temperature) is 3,367. A variety of analyses about the ranges of possible data values can be performed using standard deviations to show the variability of those numbers and the reliability of the resulting regression equation.

  THE T STATISTIC MEASURE OF RELIABILITY

  The T statistic can help determine if the regression equation calculated by the spreadsheet is a good one to use for forecasting. The T statistic reveals if an X variable has a statistically significant effect on Y, such as temperature’s effect on sales. You calculate the measure by dividing the X coefficient by its “Standard Error.” The rule of thumb says that if a T statistic is above 2 or below −2, the X variable has a statistically significant effect on Y. In our case, 16,431 / 3,367 = 4.88, a very high T statistic. It is also listed in the Excel output on the top of page 192. Therefore, an analyst would conclude that temperature is a good predictor of sales.
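  The T statistic check is one division, using the coefficient and standard error quoted from the output:

```python
x_coefficient = 16_431   # coefficient of the X variable (temperature)
standard_error = 3_367   # standard error of the X coefficient

t_stat = x_coefficient / standard_error
significant = abs(t_stat) > 2  # rule of thumb: above 2 or below -2

print(round(t_stat, 2), significant)  # 4.88 True
```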

  When considering whether a model is a good forecaster, it is necessary to have both a high R Square and a high T statistic. It is possible to create a model with more than one X variable. This is called multivariable regression. As the number of variables increases, so does the R Square. However, adding more X variables with low T statistics creates an inaccurate model. It is necessary to play with the model, adding and dropping independent variables to achieve high R Squares and high T statistics.

  DUMMY VARIABLE REGRESSION ANALYSIS

  A trick employed in regression analysis is the use of dummy variables to represent conditions that are not measured in a numerical series. Ones and zeroes are used to represent these conditions. For example, at Toys “R” Us, having the “hot” toy of the season in stock, a nonnumerical condition, catapults sales. The in-stock condition could be indicated in a data set by a 1, and out-of-stock condition could be designated by a 0, using dummy variables.

  Given a hypothetical set of data at a Toys “R” Us store, you can see how it works.

  “HOT” TOY STOCK STATUS

  Date      In Stock (1 = in, 0 = out)   Sales
  12/1/05   0                            $100,000
  12/2/05   0                            $100,000
  12/3/05   1                            $200,000
  12/4/05   1                            $200,000
  12/5/05   0                            $100,000
  12/6/05   1                            $200,000
  12/7/05   0                            $100,000

  The following regression output of the relationship between “hot” toys and sales is the result.

  This is a perfect model: the R Square shows that the model explains 100 percent of the variation in sales, and the T statistic is very large. Sales are $100,000 without the desired toys, and an extra $100,000 when they’re in stock. The regression equation, using the spreadsheet output, is

  Sales = $100,000X + $100,000

  If the coveted toys are in stock, X = 1 and sales jump to $200,000. If not, X = 0, and sales total $100,000. Dummy variables are useful and can be used to match nonscaled data, such as stock status or a holiday, with other regularly scaled data, such as temperature, interest rates, and product defects, to produce useful regression models.
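  Fitting a line to the seven days of stock-status data above reproduces this perfect model. A least squares sketch; the coefficient and intercept come out at exactly $100,000 each because the data contain no noise:

```python
def least_squares(xs, ys):
    """Fit y = m*x + b by minimizing squared differences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return m, b, 1 - ss_res / ss_tot

# Dummy variable: 1 = "hot" toy in stock, 0 = out of stock (data from the table)
in_stock = [0, 0, 1, 1, 0, 1, 0]
sales    = [100_000, 100_000, 200_000, 200_000, 100_000, 200_000, 100_000]

m, b, r2 = least_squares(in_stock, sales)
print(round(m), round(b), round(r2, 4))  # Sales = 100,000X + 100,000, R Square = 1
```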

  OTHER FORECASTING TECHNIQUES

  Time series techniques forecast outcomes based on changes in a relationship over time. In our ice cream example, the data points of temperature and sales were charted on the graph without regard to when they occurred. The regression relationship did not consider time. Obviously seasons affect Ben & Jerry’s sales. Time series analysis considers time by plotting data as it occurs. The technique then attempts to “decompose” the fluctuations within the data into three parts:

  The Underlying Trend—Up, down, flat (a long-term measure)

  The Cycles—Hourly, daily, weekly, monthly (a short-term pattern)

  The Unexplained Movements—Unusual or irregular movements caused by unique events and quirks of nature

  Regression and moving averages are used to determine the trend and cycles. As you can imagine, time series forecasting is tedious and does not lend itself to a short and simple example. However, it is helpful at least to know that time series techniques exist.
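  A moving average, one of the workhorses of time series decomposition, can be sketched briefly. The monthly sales figures below are hypothetical; a three-month window averages away the short-term wiggles and leaves the underlying trend:

```python
def moving_average(series, window=3):
    """Moving average: smooths short-term cycles to expose the trend."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Hypothetical monthly sales (in $ thousands), trending upward with wiggles
sales = [10, 12, 14, 13, 15, 17, 16]

trend = moving_average(sales)
print(trend)  # [12.0, 13.0, 14.0, 15.0, 16.0]
```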

  SUMMARY

  This chapter has described the quantitative tools that perform the following functions:

  Sort out complex problems with decision trees

  Determine the value of cash received in the future—cash flow analysis and net present value analysis

  Quantify uncertainty with probability theory

  Determine relationships and forecast with regression analysis and other forecasting techniques

  These are practical tools that MBAs use to meet business challenges. They give MBAs the power to make informed decisions and to distinguish themselves on the job.

  KEY QA TAKEAWAYS

  Decision Trees—A way to graphically show and quantify multiple outcomes of a business decision

  Sunk Cost—Investments made in the past that have no bearing on future investment decisions

  Expected Monetary Value (EMV)—The blended value of a decision based on the probabilities and values of all possible outcomes

  Accumulated Value—The total future value of cash flows with all earnings reinvested

  Net Present Value (NPV)—The total present value of all cash flows “discounted” to today’s dollars

  Internal Rate of Return (IRR)—The discount rate that makes the net present value of the cash flows equal zero in today’s dollars