1. Assume that the assumptions underlying the CAPM hold. Evaluate whether the following statements are true or false.

A. A firm with high variance will have a higher beta than one with lower variance.

True False

B. A portfolio is efficient if it has no unsystematic risk.

True False

C. A firm which is highly correlated with the market will have a higher beta than one which is less correlated.

True False

D. If the variance of the overall market goes up, the betas of all firms will go down.

True False

E. A well-managed firm will have a lower beta than a badly-managed firm.

True False

F. The market portfolio is efficient and therefore contains only the best stocks in the market.

True False

G. A risk lover will hold the riskiest stocks in the market, whereas a risk-averse investor will hold the safest stocks.

True False

H. If the riskless rate increases, the slope of the capital market line will decrease.

True False

2. You are in a world where there are only two assets, gold and stocks. You are interested in investing your money in one, the other or both assets. Consequently you collect the following data on the returns on the two assets over the last six years.

                        Gold    Stock Market

Average return           8%         20%

Standard deviation      25%         22%

Correlation between gold and stocks: -0.4

A. If you were constrained to pick just one, which one would you choose?

B. A friend argues that this is wrong. He says that you are ignoring the big payoffs that you can get on gold. How would you go about alleviating his concern?

C. How would a portfolio composed of equal proportions in gold and stocks do in terms of mean and variance?

D. You now learn that GPEC (a cartel of Gold producing countries) is going to vary the amount of gold it produces with stock prices in the US. (They will produce less gold when stock markets are up and more when it is down.) What effect will this have on your portfolios? Explain.
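Part (C) can be worked out with the standard two-asset portfolio formulas. The sketch below uses the numbers from the table above; the weights and variable names are illustrative, not part of the problem statement.

```python
# Mean and variance of an equal-weight gold/stock portfolio, using the
# standard two-asset formulas and the data from the table above.
mean_gold, mean_stock = 8.0, 20.0   # average returns, in percent
sd_gold, sd_stock = 25.0, 22.0      # standard deviations, in percent
corr = -0.4                         # correlation between the two assets

w = 0.5                             # equal proportions in each asset
port_mean = w * mean_gold + (1 - w) * mean_stock
port_var = (w**2 * sd_gold**2 + (1 - w)**2 * sd_stock**2
            + 2 * w * (1 - w) * corr * sd_gold * sd_stock)
port_sd = port_var ** 0.5

print(port_mean)   # 14.0 (percent)
print(port_var)    # 167.25
```

Note how the negative correlation term pulls the portfolio variance well below either asset's own variance.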

3. You have just learnt about the Markowitz frontier and are eager to put it into practice.

A. What steps would you have to go through?

- Defining universe

- Data requirements

- Calculations and Statistics

B. Assume you have succeeded in deriving the frontier. How would you go about providing investment advice to naive investors who come to you for recommendations? If you use this as a basis for your investment recommendations, what assumptions are you making?

C. How would the frontier that you have calculated change if

- A massive disaster wiped out a hundred firms that used to be part of your universe

- You ignored foreign stocks initially, but now added them on

- A breakthrough in technology occurs, which cuts in half the cost of making computer chips

4. Assume that the average variance of return for an individual security is 50 and that the average covariance is 10. What is the expected variance of a portfolio of 5, 10, 20, 50 and 100 securities? How many securities need to be held before the risk of a portfolio is only 10% more than the minimum?
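A sketch of the calculation behind problem 4: for an equally weighted portfolio of n securities, the portfolio variance is (average variance)/n + (average covariance)(n - 1)/n, which tends toward the average covariance as n grows.

```python
# Variance of an equally weighted portfolio of n securities:
#   avg_var / n + avg_cov * (n - 1) / n
# As n grows, this tends to avg_cov (here 10), the minimum achievable.
avg_var, avg_cov = 50.0, 10.0

def portfolio_variance(n):
    return avg_var / n + avg_cov * (n - 1) / n

for n in (5, 10, 20, 50, 100):
    print(n, portfolio_variance(n))

# Smallest n with variance no more than 10% above the minimum (avg_cov):
n = 1
while portfolio_variance(n) > 1.1 * avg_cov:
    n += 1
print(n)   # 40
```

With these numbers the variance simplifies to 10 + 40/n, so 40 securities bring it down to exactly 11, i.e. 10% above the minimum of 10.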

5. The CAPM has been attacked on several different dimensions. Summarize the criticism of the CAPM and evaluate whether it is justified.

6. You are comparing the arbitrage pricing model to the capital asset pricing model.

A. What are the similarities between the two models? What are the differences?

B. You are estimating the expected returns for a stock using both
the CAPM and the arbitrage pricing model. Under what conditions
would you get the same expected return? If the expected returns
are different, how would you explain the difference?

Large amounts of data are often compressed into more easily assimilated summaries, which provide the user with a sense of the content without overwhelming him or her with too many numbers. There are a number of ways in which data can be presented. One approach breaks the numbers down into individual values (or ranges of values) and provides probabilities for each range. This is called a "distribution". Another approach is to estimate "summary statistics" for the data. For a data series X_1, X_2, X_3, ..., X_n, where n is the number of observations in the series, the most widely used summary statistics are as follows:

- the mean (m), which is the average of all of the observations in the data series

- the median, which is the mid-point of the series; half the data in the series is higher than the median and half is lower

- the variance, which is a measure of the spread in the distribution around the mean; it is calculated by first summing up the squared deviations from the mean, and then dividing by the number of observations n (if the data represents the entire population) or by n - 1 (if the data represents a sample)
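The three statistics above can be illustrated on a short made-up series; the data values here are arbitrary, chosen only to keep the arithmetic easy to check by hand.

```python
import statistics

# A made-up data series to illustrate the three summary statistics.
data = [4.0, 8.0, 6.0, 5.0, 12.0]

mean = sum(data) / len(data)        # average of the observations
median = statistics.median(data)    # mid-point of the sorted series

# Population variance: divide the squared deviations by n ...
pop_var = sum((x - mean) ** 2 for x in data) / len(data)
# ... sample variance: divide by n - 1 instead.
sample_var = sum((x - mean) ** 2 for x in data) / (len(data) - 1)

print(mean, median, pop_var, sample_var)   # 7.0 6.0 8.0 10.0
```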

When there are two series of data, there are a number of statistical measures that can be used to capture how the two series move together over time. The two most widely used are the correlation and the covariance. For two data series, X (X_1, X_2, ...) and Y (Y_1, Y_2, ...), the covariance provides a non-standardized measure of the degree to which they move together; it is estimated by taking the products of the deviations from the mean for each variable in each period and averaging them across periods:

Covariance(X,Y) = Σ (X_i - μ_X)(Y_i - μ_Y) / n
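A minimal sketch of the covariance calculation just described, in its population form (dividing by n); the two series are made up for illustration.

```python
# Population covariance of two short made-up series.
X = [1.0, 2.0, 3.0, 4.0]
Y = [2.0, 4.0, 5.0, 9.0]

mean_x = sum(X) / len(X)
mean_y = sum(Y) / len(Y)

# Multiply the deviations from the mean in each period, then average.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y)) / len(X)
print(cov)   # 2.75 -- positive, so the two series move together
```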

The sign on the covariance indicates the type of relationship that the two variables have. A positive sign indicates that they move together and a negative sign that they move in opposite directions.
While the covariance increases with the strength of the relationship,
it is still relatively difficult to draw judgements on the strength
of the relationship between two variables by looking at the covariance,
since it is not standardized.

The correlation is the standardized measure of the relationship between two variables. It can be computed from the covariance:

Correlation(X,Y) = Covariance(X,Y) / (σ_X σ_Y)

where σ_X and σ_Y are the standard deviations of X and Y.

The correlation can never be greater than 1 or less than minus 1. A correlation close to zero indicates that the two variables are unrelated. A positive correlation indicates that the two variables move together, and the relationship is stronger the closer the correlation gets to one. A negative correlation indicates the two variables move in opposite directions, and that relationship also gets stronger the closer the correlation gets to minus 1. Two variables that are perfectly positively correlated (r = 1) essentially move in perfect proportion in the same direction, while two assets which are perfectly negatively correlated move in perfect proportion in opposite directions.
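The standardization step can be sketched as follows; the two made-up series below move in opposite directions, so the correlation comes out close to minus 1.

```python
# Standardize the covariance into a correlation:
#   correlation = covariance / (sd_X * sd_Y)
X = [1.0, 2.0, 3.0, 4.0]
Y = [8.0, 6.0, 5.0, 1.0]

n = len(X)
mean_x, mean_y = sum(X) / n, sum(Y) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y)) / n
sd_x = (sum((x - mean_x) ** 2 for x in X) / n) ** 0.5
sd_y = (sum((y - mean_y) ** 2 for y in Y) / n) ** 0.5

corr = cov / (sd_x * sd_y)
print(corr)   # close to -1: the series move in opposite directions
```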

A simple regression is an extension of the correlation/covariance concept that goes one step further. It attempts to explain one variable, called the dependent variable, using the other variable, called the independent variable. Keeping with statistical tradition, let Y be the dependent variable and X be the independent variable. If the two variables are plotted against each other on a scatter plot, with Y on the vertical axis and X on the horizontal axis, the regression attempts to fit a straight line through the points so as to minimize the sum of the squared deviations of the points from the line. Consequently, it is called ordinary least squares (OLS) regression. When such a line is fit, two parameters emerge: one is the point at which the line cuts through the Y axis, called the intercept of the regression, and the other is the slope of the regression line.

The slope (b) of the regression measures both the direction and the magnitude of the relationship. When the two variables are positively correlated, the slope will also be positive, whereas when the two variables are negatively correlated, the slope will be negative. The magnitude of the slope of the regression can be read as follows: for every unit increase in the independent variable (X), the dependent variable (Y) will change by b (the slope). The close linkage between the slope of the regression and the correlation/covariance should not be surprising, since the slope is estimated from the covariance:

b = Covariance(X,Y) / σ_X²

where σ_X² is the variance of the independent variable.

The intercept (a) of the regression can be read in a number of ways. One interpretation is that it is the value that Y will have when X is zero. Another is more straightforward, and is based upon how it is calculated: it is the difference between the average value of Y and the slope-adjusted average value of X:

a = μ_Y - b μ_X
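The slope and intercept calculations can be sketched directly from these formulas; the X and Y series below are made up, chosen to lie roughly on a line of slope 2.

```python
# Minimal OLS fit: slope from the covariance, intercept from the means.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(X)
mean_x, mean_y = sum(X) / n, sum(Y) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y)) / n
var_x = sum((x - mean_x) ** 2 for x in X) / n

b = cov / var_x          # slope: covariance over variance of X
a = mean_y - b * mean_x  # intercept: mean of Y minus slope-adjusted mean of X

print(a, b)
```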

Regression parameters are always estimated with some noise, partly because the data is measured with error and partly because we estimate them from samples of data. This noise is captured in a couple of statistics. One is the R-squared of the regression, which measures the proportion of the variability in Y that is explained by X. It is a direct function of the correlation between the variables; for a simple regression:

R-squared = [Correlation(X,Y)]²

An R-squared value closer to one indicates a strong relationship between the two variables, though the relationship may be either positive or negative. Another measure of noise in a regression is the standard error, which measures the "spread" around each of the two parameters estimated, the intercept and the slope. Each parameter has an associated standard error, which is calculated from the data.

If we make the additional assumption that the intercept and slope estimates are normally distributed, each parameter estimate can be divided by its standard error to get a "t statistic" that measures whether the relationship is statistically significant.

For samples with more than 120 observations, a t statistic greater than 1.66 indicates that the variable is significantly different from zero with 95% certainty, while a statistic greater than 2.36 indicates the same with 99% certainty. For smaller samples, the t statistic has to be larger to have statistical significance.
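The R-squared and t statistic calculations can be sketched together. The series below are made up; the standard error of the slope uses the textbook formula (residual variance with n - 2 degrees of freedom, divided by the sum of squared deviations of X).

```python
# R-squared (squared correlation) and the t statistic on the slope
# for a simple regression on made-up data.
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Y = [1.2, 1.9, 3.2, 3.8, 5.1, 6.0]

n = len(X)
mean_x, mean_y = sum(X) / n, sum(Y) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(X, Y)) / n
var_x = sum((x - mean_x) ** 2 for x in X) / n
var_y = sum((y - mean_y) ** 2 for y in Y) / n

b = cov / var_x
a = mean_y - b * mean_x
r_squared = cov ** 2 / (var_x * var_y)   # squared correlation

# Standard error of the slope: sqrt(residual variance with n - 2 df)
# over sqrt(sum of squared deviations of X).
residuals = [y - (a + b * x) for x, y in zip(X, Y)]
se_b = (sum(e ** 2 for e in residuals) / (n - 2)) ** 0.5 / (var_x * n) ** 0.5

t_stat = b / se_b
print(r_squared, t_stat)
```

With this tight linear relationship, the R-squared comes out above 0.99 and the t statistic is far above the significance thresholds quoted above.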

The regression that measures the relationship between two variables becomes a multiple regression when it is extended to include more than one independent variable (X_1, X_2, X_3, X_4, ...) in trying to explain the dependent variable Y. While the graphical presentation becomes more difficult, the multiple regression yields a form that is an extension of the simple regression.

The R-squared still measures the strength of the relationship, but an additional statistic, the adjusted R-squared, is computed to counter the bias that induces the R-squared to keep increasing as more independent variables are added to the regression. If there are n observations and k independent variables in the regression, the adjusted R-squared is computed as follows:

Adjusted R² = 1 - (1 - R²) (n - 1) / (n - k - 1)
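The adjustment can be sketched as a one-line function; the R-squared values, sample size, and variable counts below are made up to show how the penalty for extra variables works.

```python
# Adjusted R-squared: 1 - (1 - R^2) * (n - 1) / (n - k - 1),
# where n is the number of observations and k the number of
# independent variables.
def adjusted_r_squared(r_squared, n, k):
    return 1 - (1 - r_squared) * (n - 1) / (n - k - 1)

# Adding four more variables raises the raw R-squared slightly
# (0.80 -> 0.82), but the adjusted R-squared falls: the penalty for
# the extra variables outweighs the small gain in fit.
print(adjusted_r_squared(0.80, 30, 2))
print(adjusted_r_squared(0.82, 30, 6))
```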