Chapter 3&4 Lecture Notes


Economics for Managers – Lecture Notes Ch. 3&4
Chapter 3
Objective Function – A function that describes firm behavior; it usually defines profit. It is
then turned into a maximization or minimization problem.

Maximization Problem – Problem used to find optimal (maximum) values of the objective
function, usually profit or output.
Minimization Problem – Problem used to find optimal (minimum) values of the objective
function, usually a cost function.
Choice Variable – Variables under the control of the firm, that determine the value of the
objective function. In this case it’s “q”, the individual firm gets to choose how much to produce.

Discrete Variable – Choice variables that can only take on limited values like integers: 1, 2, 3, 4

Continuous Variable – Choice variable that can take on any value between two points.

Unconstrained Optimization – Optimizing an objective function over all possible values of the choice
variable (q) without any conditions

Constrained Optimization – Optimizing an objective function over all possible values of choice variable
(q) where some other condition also holds. Usually a budget constraint.

Marginal Analysis – Using infinitesimally small changes to find a maximum/minimum in the objective
function.
Simple Optimization Example: Running a Bookstore (Unconstrained)

 Every business owner runs into the same problem with a downward sloping demand curve
o If you lower your price, you sell more units, but you get less per unit.
o If you increase your price, you will get more per unit, but you sell fewer units.
 What should you do?
o One solution is to calculate Total Revenue (TR = Price * Quantity)
An Easier Way?

1. Find the Inverse Demand P(q)

2. Define Total Revenue as P(q) * q

3. Maximize Total Revenue
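The three steps above can be sketched in code, assuming a hypothetical inverse demand P(q) = 10 − 0.5q (invented for illustration, not from the notes):

```python
def inverse_demand(q):
    """Step 1: the price buyers will pay when q units are offered."""
    return 10 - 0.5 * q

def total_revenue(q):
    """Step 2: TR = P(q) * q."""
    return inverse_demand(q) * q

# Step 3: maximize TR -- here by a simple search over candidate quantities.
best_q = max(range(0, 21), key=total_revenue)
print(best_q, total_revenue(best_q))  # q = 10 maximizes TR at 50.0
```

With a linear inverse demand, marginal revenue falls twice as fast as price, so revenue peaks halfway down the demand curve (here at q = 10).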

Net Benefits
 Most of these types of problems take this basic form NB = TB-TC
 NB – Net Benefit, TB – Total Benefit, TC – Total Cost
 You will pick some level of activity (A) (output q in the book example)
 The activity A is usually assumed to have two properties:
o TB is always increasing in A, but at a decreasing rate.
o TC is always increasing in A, usually at an increasing rate.
 Under these conditions, the value of A that yields the largest difference between TB and TC is
the one that will maximize Net Benefits (NB) aka Profit or Utility.
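This setup can be illustrated numerically with hypothetical curves (invented for illustration): TB(A) = 20·sqrt(A), which increases at a decreasing rate, and TC(A) = A^2, which increases at an increasing rate. At the maximizing activity level, MB equals MC:

```python
import math

# Assumed hypothetical curves (not from the notes):
# TB(A) = 20*sqrt(A)  -- increasing at a decreasing rate
# TC(A) = A**2        -- increasing at an increasing rate
def net_benefit(A):
    return 20 * math.sqrt(A) - A ** 2

# Search a fine grid for the activity level that maximizes NB = TB - TC.
grid = [i / 1000 for i in range(1, 10001)]
best_A = max(grid, key=net_benefit)

# At the optimum, MB = 10/sqrt(A) should equal MC = 2A.
mb = 10 / math.sqrt(best_A)
mc = 2 * best_A
print(round(best_A, 3), round(mb, 2), round(mc, 2))
```

The search lands where the tangent slopes of TB and TC are equal, which is exactly the MB = MC condition described below.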

Marginal Benefit – Gives the measure of the change in benefits given a change in the activity (at a
particular point or between points).

Marginal Cost – Gives the measure of the change in costs given a change in the activity (at a particular
point or between points).

 MB and MC define the slopes of the tangent lines to the TB and TC curves.
Total Costs – Sum of all costs of operation including fixed costs and variable costs, does NOT include
sunk costs.

Fixed Costs – Costs that do not vary with output but must still be paid

o Insurance, License Fees, Rent


o A change in fixed costs does not change the slope of the TC curve; it only shifts the
curve up or down.
o Fixed costs do not matter for marginal decision making, because they just vertically
shift the TC curve. (Slopes don’t change)

Variable Costs – Costs that DO vary with output.

 Parts, Labor, Electricity

Sunk Costs – Costs that have already been paid and cannot be recovered. Should not be considered in
future decision making.

 Time, Worthless Investments

Constrained Optimization

 The optimal solution, the one that maximizes utility, is one in which you consume in a way
that equalizes the per-dollar marginal benefits across goods.
 When you have a budget, resource, or time constraint, you have to ration the scarce resources.
 Where do these conditions come from?

 They actually fall right out of a maximization problem.


 Imagine I have utility over beer b and wings w defined as

 Graphing beer consumed against total happiness (holding wing consumption constant)
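Since the utility function is not reproduced here, a minimal sketch with an assumed utility U(b, w) = ln(b) + ln(w) and invented prices and budget shows the equal per-dollar marginal benefit condition falling out of the maximization:

```python
import math

# All numbers are invented for illustration; the notes' actual utility
# function and prices are not shown here.
P_BEER, P_WINGS, BUDGET = 4.0, 2.0, 40.0

def utility(b, w):
    return math.log(b) + math.log(w)

# Brute-force the budget constraint: spend s on beer, BUDGET - s on wings.
best_s = max(
    (s / 100 for s in range(1, int(BUDGET * 100))),
    key=lambda s: utility(s / P_BEER, (BUDGET - s) / P_WINGS),
)
b, w = best_s / P_BEER, (BUDGET - best_s) / P_WINGS

# At the optimum the per-dollar marginal benefits are equal:
# MU_b / p_b = (1/b)/4  and  MU_w / p_w = (1/w)/2.
print(round(b, 2), round(w, 2))
```

With this log utility the optimum splits the budget evenly between the two goods, and the per-dollar marginal utilities come out equal, just as the condition above requires.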

Chapter 4
Parameters and Regression
 a, b, and c are called parameters, and they need to be estimated.
 Parameters are coefficients in an equation that determine the exact mathematical relation
among the variables in the equation.
 We get an estimate of the parameters using a technique called regression analysis.
 Regression analysis - technique of fitting a line through a series of points to minimize some
error function. The most common error function used is squared error.
 Regression analysis is also used to find the values of parameters.
o Uses data on economic variables to determine a mathematical equation that describes
the relation between the economic variables.
o Objective of a regression analysis is to find:
 Estimates of the true slope and intercept parameters
 The straight line that best fits a scatter of data points.
 A Regression equation shows the expected level of a dependent variable given levels of
explanatory variables.
Variables
 Regression is used to determine the relationship between dependent variables and explanatory
variables.
 Dependent Variables - variable that is said to "depend" on or be "caused by" the explanatory
variables.
o Total cost in our example
 Explanatory Variables - variables that "explain" or "cause" certain values of the dependent
variable.
o Quantity produced in our example.
Linear Model
 In a linear model we ASSUME that the relationship is linear:
Y = a + bX
o a - “Intercept” parameter, where the line crosses the Y axis
o b - “Slope” parameter, it is the slope of the line
 It is called a linear model because that is the generic equation of a line.
The True Model
 For regression analysis to work, and to be meaningful, there has to be an actual or true
relationship between the variables.
 X and Y need to actually be related to one another in some way.
 The true relationship is unknown, that is the point of estimation, but it must exist or this whole
enterprise is a waste of time.
o Pickles and pregnancy
o Hot dogs and horseshoes
Example:
 If that is the “true model,” the actual relationship, why don’t all of my observed points fall on the
line? The data probably looks something like this,
 Ads are not the only thing that explain total dollar sales S!
o local income
o weather
o preferences
o word of mouth
 We lump everything else into the random error or error term (e)
 Random Error - all of the unobservable factors that influence S other than A

 Scatter Diagram - We observe a bunch of ad expenditure and sales points.

Method of Least Square or Ordinary Least Squares (OLS)


 Idea that the sample regression line that is most likely to match the true regression line is the
line that minimizes the sum of the squared distances from each sample data point to the
sample regression line.
 How do we want to punish ourselves for predicting wrong?
 Residual - The vertical distance between the actual observed value Yi and our predicted value
Yˆi.
o ei = Yi − Yˆi
Minimizing the Errors

We want to minimize the error, but how do you want to do that?

1. We could pick a line that minimizes the sum of all of the absolute values.

As it turns out, there are multiple lines that will do this.


2. Minimize the squared error. With the squared error, the line is unique.

Fitting the Line

 Time-Series data set – data collected over time for a specific firm.
 Cross-Sectional data set – data collected from several different firms at a given time.
 It can be shown that the line that minimizes the sum of the squared error is unique, which
is super convenient.

 That line is defined:

o bˆ = Σ(Xi − X¯)(Yi − Y¯) / Σ(Xi − X¯)²
o aˆ = Y¯ − bˆX¯
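As a sketch, the least-squares slope and intercept can be computed by hand on invented (X, Y) data:

```python
# Made-up data, roughly Y = 2X, for illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Slope: sum of cross-deviations over sum of squared X deviations.
b_hat = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
# Intercept: force the line through the point of means.
a_hat = y_bar - b_hat * x_bar

print(round(b_hat, 3), round(a_hat, 3))
```

The fitted line always passes through the point of means (X¯, Y¯), which is why the intercept formula is just Y¯ − bˆX¯.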
Estimates and Estimators

 bˆ and aˆ are estimates found using the Least Squares estimator.


 Estimator - a method of estimation. There are multiple ways that we could estimate Y.
o One estimator is the average of Y, for which Y¯ would be the estimate.
o There are many estimators, each producing its own estimates.

Statistical Significance

Statistically Significant - we have sufficient evidence (predicated on our assumptions being true) that the
TRUE value of the relationship between taxes and population (b) is not zero.

 b vs. bˆ - One is the true value we do not observe; the other is an estimate based off of a sample.
 We are taking a sample and making an inference about all possible cases using that sample.
 A parameter is said to be statistically significant if there is sufficient evidence that the true value
of the parameter does not equal zero.

Hypothesis Testing

Hypothesis Test - A technique where we estimate the probability of an outcome under some
hypothesized value.

 Assume for a moment that there is no true relationship between taxes levied and population.
 A typical hypothesis test calculates the probability that the sample of observations we pulled
would find a value of 209 given that there is no actual relationship between taxes and population.

Intuition Behind Hypothesis Testing

 There is no relationship between a person’s biological sex and their eye color.
 However, we could still test this. There is a true proportion; we could find it by grabbing all the
people with brown eyes and counting, but that is practically impossible.
 We could walk around and collect observations (take samples of the population) on eye color and
sex.
 Each day we collect a sample, there will be a different proportion of male/female that have brown
eyes, just due to randomness.
 Our estimate is the proportion of male/female with brown eyes.

Distribution of bˆ
 Relative frequency distribution - The distribution (and relative frequency) of values b̂ can take
because observations on Y and X come from a random sample.
 We can treat each day as a sample, each sample has its own estimate, and we could do this a lot.
 We could save each one of these estimates, and before long we have a distribution of the
estimates.

 Imagine we get a sample with a value of .9


 We ask ourselves what is the probability of getting .9 or something more extreme given the true
value is .5?
 It is represented by the area under the curve to the right of .9

 Notice that the distribution is centered around the actual value of b


 Unbiased Estimator - an estimator that will, on average, equal the true value.
o In other words, the distribution of the estimator is centered at the actual value of b
 THE CENTRAL LIMIT THEOREM: the sample means will be normally distributed and
centered at the actual value of b, just as each of our sample proportions was centered at .5.
 https://www.khanacademy.org/math/ap-statistics/sampling-distribution-ap/sampling-distribution-
mean/v/central-limit-theorem

Back to Hypothesis Testing


 Imagine further that we do not know the true value for the ratio but we are hypothesizing it is .5,
and at the end of our day we get a ratio of .52 for our estimate.
 We can test if the sample ratio of .52 is statistically different from our hypothesized value of .5.
 We would do this with a t-test.
 The t-test tells you the probability of the outcome you actually observed, under the null (or
hypothesized value) with a t-statistic.

The t-statistic

 The t-statistic is defined:

o t = (bˆ − bh) / Sbˆ

 bh - is the hypothesized value of the true b


 Sbˆ is the standard error of the estimate; it measures the variability of bˆ from sample to sample.
 If the hypothesized value is zero (bh = 0), which is typically the standard for regression analysis,
then the t-stat collapses to t = bˆ / Sbˆ.

Calculating Under the Null

 The larger the t-stat, the more confident we are in rejecting the null hypothesis. We usually pick some
critical value of t(more on this in a minute), prior to the investigation, that would cause us to reject
the null or hypothesized value. This value is typically anywhere from 1.96 to 2.00 in absolute value.
 For simplicity, assume Sbˆ = .1; given our estimate of .52, do we reject bh = .5? Then t = (.52 − .5)/.1 = 0.2.

 0.2 is a far cry from 1.96, so we would fail to reject this null.
 This makes sense because the hypothesized value is the true value.
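The arithmetic of this example, as a quick check:

```python
# t = (b-hat - b_h) / S_b-hat, with the example's values.
b_hat, b_h, se = 0.52, 0.50, 0.1
t_stat = (b_hat - b_h) / se
print(round(t_stat, 2))  # 0.2 -- far below a ~1.96 critical value,
                         # so we fail to reject the null of .5
```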

Errors
 Type I Error - finding a parameter estimate to be significant when it is not.
o We find a relationship between variables that does not really exist. We reject the null
hypothesis (bh = 0) for pop munic when we shouldn’t.
 Type II Error - We fail to reject a null that should have been rejected.
o We fail to find a relationship between variables that really does exist.
Level of Significance/Confidence

Level of Significance - How willing are we to commit a Type I error (reject a true null)?

 Just due to randomness in samples it is possible that we collect a sample that would cause us to
reject a true null of .5.
 If we hypothetically grabbed 100 samples what percentage of those are we willing to reject a true
null?
 The standard is 5%: we set the critical t-values such that we would incorrectly reject a true null
only 5% of the time.
 If the t-statistic value is greater than the t-critical , meaning that it is beyond it on the x-axis (a
blue x), then the null hypothesis is rejected and the alternate hypothesis is accepted. However, if
the t-statistic had been less than the t-critical value (a red x), the null hypothesis would have been
retained.

Level of Confidence - is (1 - Level of significance), if the Level of significance = 5% then we are 95%
confident in our decision to accept/reject the null.

The p-value

The p-value has two interpretations.

1. It is the probability of a Type I error (rejecting a true null) given that the null
(bh = 0) is true; equivalently, it is the exact level of significance for the t-ratio
associated with a parameter estimate.
2. It is the probability of finding the bˆ that we found, or something more extreme, given
that the true b = 0.

So what is our predicted value of Y?

Yˆ = aˆ + bˆX

 Just plug in the values


o taxesˆ = 2,354,835 + 208.699(pop munic)
 I expect a city with 100,000 people to collect

taxesˆ = 2,354,835 + 208.699(100,000) = 23,224,735

Coefficient of determination – Measures the fraction of the total variation in the dependent variable that
is explained by the regression equation.
 The value of R^2 can range from 0 to 1.
 If R^2 is high, the regression explains much of the variation in the dependent variable,
and vice versa.
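A sketch of the R^2 calculation on invented data (the a and b estimates below are the least-squares fits for this made-up data, not values from the notes):

```python
# Made-up data and its least-squares fit, for illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a_hat, b_hat = 0.05, 1.99

y_bar = sum(ys) / len(ys)
ss_total = sum((y - y_bar) ** 2 for y in ys)        # total variation in Y
ss_resid = sum((y - (a_hat + b_hat * x)) ** 2
               for x, y in zip(xs, ys))             # unexplained variation
r_squared = 1 - ss_resid / ss_total
print(round(r_squared, 4))
```

Here nearly all of the variation in Y is explained by the line, so R^2 is close to 1.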

F-Statistic – Provides a measure of the ratio of explained variation (in the dependent variable) to
unexplained variation.
 To test whether the overall regression equation is significant, this statistic is compared
with the critical F value obtained from an F-table.
 The test uses k − 1 and n − k degrees of freedom (k parameters, n observations).
 If the F-Stat exceeds the critical F-value, the regression equation is statistically
significant.

Multiple Regression

 Uses two or more explanatory variables: Y = a + bX + cW + dZ
 In this equation, Y is the dependent variable
 a is the intercept parameter
 X, W, and Z are the explanatory variables
 b, c, and d are the slope parameters for each of these explanatory variables
Log-Linear Regression Models – Another kind of nonlinear equation that can be estimated by
transforming the equation into a linear form, in which Y is related to one or more explanatory variables in
a multiplicative fashion.

This nonlinear function form is used to estimate demand functions and to estimate production functions,
which is particularly useful because the parameters “b” and “c” are elasticities.

Using this form of nonlinear regression, the elasticities are estimated directly: The parameter estimates
associated with each explanatory variable are elasticities. (The parameter a, however, is not an elasticity).

To estimate the parameters of this nonlinear equation, it must be transformed into a linear form. This is
accomplished by taking natural logarithms of both sides of Y = aX^b W^c, which results in:

ln Y = ln a + b ln X + c ln W
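A sketch of the transformation, assuming a hypothetical one-variable form Y = aX^b with invented values a = 2 and elasticity b = 0.5; fitting the logged equation by ordinary least squares recovers the elasticity directly:

```python
import math

# Hypothetical multiplicative relationship (numbers invented).
a_true, b_true = 2.0, 0.5
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [a_true * x ** b_true for x in xs]

# Take logs of both sides: ln Y = ln a + b ln X, then fit by OLS.
lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
lx_bar, ly_bar = sum(lx) / len(lx), sum(ly) / len(ly)
b_hat = sum((u - lx_bar) * (v - ly_bar) for u, v in zip(lx, ly)) \
        / sum((u - lx_bar) ** 2 for u in lx)
a_hat = math.exp(ly_bar - b_hat * lx_bar)  # undo the log on the intercept

print(round(b_hat, 3), round(a_hat, 3))  # elasticity 0.5, scale 2.0
```

The slope on the logged data is the elasticity itself, which is exactly why this functional form is convenient for demand and production estimation.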
