Dilbert's Problems
Abstract
Fifty numerically-oriented, skill-building problems in applied mathematics are offered for engineering
graduate students. The intention is to solve slightly unfamiliar problems using widely available tools like
Matlab and Maple. These problems may help students with computer programming in addition to nonlinear
simultaneous equations, differential equations, boundary value problems, optimization, numerical integration
and elementary probability. The student will usually have to think a little about how the problem should
be set up for solution. Partial answers are provided.
Introduction
I have frequently wished for a set of small applied mathematics problems that might help a beginning engineering
graduate student develop a few practical skills. I eventually put some together myself. I have used Matlab and
Maple to get reasonable numerical answers, as indicated below.
Problem statements
1. Given x'' + 2.6 x' + 7.9 x³ = 3.2 sin(xt), with x(0) = 1.2 and x'(0) = …, find x(2.3).
2. In problem 1, let one coefficient be an unknown parameter a, and numerically seek values of a for which x(2.3) = 0.6.
3. Numerically solve the simultaneous equations
x + y = 2,   x² + y² = 3.
4. Consider the matrix
A = [7, a, 0, 14; −13, 15, 12, 0; i, 1+i, 0, 0; i, i, i, i],
and numerically seek real a for which the matrix has an eigenvalue with magnitude 2.6.
5. Same as above, only now ignore the magnitude and let the eigenvalue have an argument of 37° (see Matlab's phase).
6. Same as problem 4, only now seek complex values of a and plot an appropriate curve (or curves) on the
complex plane.
7. For the matrix
[1, a; 3, 6],
numerically seek real a such that the angle between the two eigenvectors of the matrix is 35°.
9. Write a Simpson's rule integration routine in Matlab, and use it to evaluate
∫₀² sin(2x³/(1 + ln(1 + x))) dx.
10. Using Maple, find a power series expansion, for small x, of the integrand of problem 11.
11. Using what you learnt from the last two problems find, to at least 3 significant digits,
∫₀² x^0.7 cos(1 + ln(1 + x)) / (√(sin x) · ln(1 + x)) dx.
12. Find, by integrating on the positive real line,
∫₀^∞ ln(1 + x) cos(1 + √x) / x^0.6 dx.
The integral converges slowly, so the contribution from x ∈ (A, ∞) for some large A should be computed from a suitable asymptotic approximation. Similarly consider a series for small x.
13. Using complex variables, reduce the previous integral to one on the imaginary axis. Compare results.
14. Given x'' + λ (2 + cos t) x = 0 and the boundary conditions x(0) = x(2.2) = 0, find the first five positive real
values of λ for which nonzero solutions exist. All ODE solution should be done using Matlab's ode45.
15. For problem 14, set up a finite difference scheme in Matlab, and find the first five λ-values using Matlab's
eig.
16. Again for problem 14, assume a solution in the form of a Fourier sine series, and find the first five λ-values
using Maple.
17. Learn Floquet theory. For the system x'' + sin(t) x' + cos(t) x = 0, find the Floquet matrix (monodromy
matrix).
18. For the system x'' + sin(t) x' + a cos(t) x = 0, seek real values of a such that the Floquet matrix has an
eigenvalue of magnitude 5.
19. Analytically find the solution, for x ∈ (0, 3), of
y'' + 0.3 y' + 3y = ∫₀³ y dx,   y(0) = 0,   y(3) = 1.
20. Numerically solve
y'' + 0.3 y' + 3y³ = ∫₀³ e^(−y) dx,   y(0) = y(3) = 0.
21. Numerically solve
y'' + 0.3 y' + 3y + ∫₀^x s cos y(s) ds = ∫₀³ e^(−y(s)) ds,   y(0) = y(3) = 0.
22. Consider
y''(x) + y'(x) + y(x) = ∫₀² (1 + s y(s)) ds,   y(0) = 0,   y'(0) = 1,
and find y(3).
23. Given an equation like that of problem 22, but with y(0) = 1, and given that there exists one or more real a such that y(a) = y'(a) = 0, find y and plot it for x between 0 and a.
24. Given that
y'' + 0.3 z y' + 3y³ = sin t,   z' + y z = 0,
and
y(0) = 1,   y(1) = 0,   z(0.5) = 1,
find y and z for t ∈ (0, 1).
25. Consider
A = [1, 2, 1; 3, 0, 2; 1, 0, 3; 1, 3, 0; 3, 7, 2],   b = [1; 0; 0; 4; 2].
Find the column vector x which minimizes the error e = ‖Ax − b‖ in the 2-norm (or least squares sense).
26. Consider the function f(x) = abs(x) for x ∈ [−1, 2]. Using 300 equally spaced points on this interval,
and considering polynomials of fourth order, approximate f(x) in the least squares sense. Show f and its
approximation on a plot. Separately plot the approximation error. (Use problem 25.)
27. For the matrices in problem 25, consider x such that Ax ≤ b elementwise. In addition, x ≤ 6 and x ≥ −5,
also elementwise. Maximize the sum of the first 2 elements of x. Use Matlab's linprog.
28. Reconsider problem 25. Now find the vector x which minimizes e in the infinity norm (maximum error
magnitude sense). (Problem 27 is relevant. You can define an auxiliary variable which bounds the errors
using linear inequalities, and then minimize that.)
29. Reconsider problem 26, only now approximate the function in the maximum absolute error sense. Show
f and its approximation on a plot. Separately plot the approximation error. (Use problem 27.)
30. Reconsider problem 27, only now minimize the sum of the squares of the first 2 elements of x. Use Matlab's
quadprog.
31. Consider the van der Pol oscillator, x'' + μ x' (x² − 1) + x = 0. The system has a unique stable periodic
solution. Find the time period, accurate to at least 3 significant digits, for μ = 3.
32. In problem 31, find μ to at least 3 significant digits if the time period is known to be 8.
33. In problem 31, find the steady state RMS value of x.
34. Using a method of your choice that you implement in Matlab, plot a closed curve in the x-y plane that
satisfies
x⁴ + 0.1x³ + 2x² − x + y⁴ − 0.1y³ + y²/(1 + y²) + 0.1 xy = 100.
35. Numerically obtain the area and perimeter of the closed curve in problem 34.
36. Consider x'' + (x')⁵ + x = 0, with x(0) = 1000 and x'(0) = 0. Estimate the time taken for x to decay to 1.
41. Let f(x) = e^(ax+√(1+x)) cos(bx² + ln(1 + x)). Given that
∫₀² f(x) dx = 2   and   ∫₀² x f(x) dx = 0,
find a and b.
42. Let
A = [1, 2, −3; 1, 3, 0; a, −2, −3].
Find values of a (real or complex) such that e^a + cos(2a) is an eigenvalue of A.
43. Find, with an error of no more than about 0.01, the sum
Σ_{k=0}^∞ ln(k + 2)/(1 + k^1.1).
44. Let A be a 2 × 2 matrix whose elements are independent random numbers each uniformly distributed on
(0, 1), i.e., from Matlab's rand. What is the probability that the eigenvalues of A both lie inside the unit
circle? What is the probability of the same if the elements of A are normally distributed with mean 0.5
and standard deviation 0.5? (Numerics only.)
45. Let x'' + a x' + 6bx + x³ = 0, with x(0) = 1 and x'(0) = 0, where a and b are independent random
numbers, each uniformly distributed on (0, 1). Numerically estimate the probability that |x(4)| < 0.2.
46. Let
F(x) = (3/10) e^(2x)/(1 + e^(2x)) + (7/(10π)) tan⁻¹(2x³ + 5) + 7/20
be the cumulative distribution function of a random variable X. Find the expected value of X.
47. Evaluate, both exactly (using Maple) and by Monte Carlo integration,
∫₀¹ ⋯ ∫₀¹ ln(1 + x₁ + 2x₂ + 3x₃ + 4x₄ + 5x₅ + 6x₆) dx₁ dx₂ ⋯ dx₆.
48. Evaluate the integral of problem 47, only now taken over the region 0 < x₁ < x₂ < ⋯ < x₆ < 1.
49. Use a Monte Carlo simulation to find the mean and standard deviation of the number of times we need
to toss a coin in order to get 7 heads in a row.
50. Consider an equilateral triangle of side 3 units, lying in the x-y plane with one vertex at the origin, and one
edge along the x-axis. Remove the region inside the circle of radius 0.8 with centre at (1.0,0.5). Further
remove the region inside the circle of radius 0.7 with centre at (1.0,1.5). Finally, remove all points below
the curve y = 0.3x². Use Monte Carlo integration to find the remaining area.
Partial answers
1. x(2.3) = 0.7735841.
2. The ODE is nonlinear. What we can solve easily is an initial value problem for any given a. Large negative
a might make numerical solutions unreliable, but let us first suppose that Matlab tolerances of reltol and
abstol set to 10⁻¹² will give acceptable results. Tighter (smaller) values are often problematic anyway,
because roundoff errors may begin to exceed truncation errors.
Given a and the initial conditions at t = 0, the computed value x(2.3) is just some function f(a), which
we can compute and plot, as shown below (figure 1). A horizontal line at 0.6 is plotted for reference,
and every intersection gives a solution for a. It is seen that there are infinitely many solutions for negative
a, but a few easy ones for a > 0. In computing f(a), it helps to define a as a global variable inside the
derivative-evaluating m-file given to ode45. Then a can be set from the Matlab command window, or
from any m-file, as needed.
Figure 1: f(a) versus a. See problem 2.
3. Newton-Raphson method with numerically estimated Jacobians. You may adjust the termination tolerance, size of the finite difference step for estimating the derivative, and allowed number of iterations. fun
is the name of the m-file which evaluates some vector function f (x), where we seek x for which f (x) = 0.
function x=newton(fun,x,verbose)
% numerically estimates derivatives and implements Newton-Raphson method
% pass any third argument to display successive iterates
n=length(x);
epsil=(1e-5*max(1,norm(x)));
pert=eye(n)*epsil;
iter=0; nmax=60;
ee=feval(fun,x);
while (norm(ee)*max(1,norm(x))>1e-10)*(iter<nmax)
iter=iter+1;
for k=1:n
D(:,k)=(feval(fun,x+pert(:,k))-ee)/epsil;
end
x=x-(D\ee);
if nargin == 3
disp(x)
end
ee=feval(fun,x);
end
disp(iter), disp('iterations')
if (iter == nmax)+any(abs(x)==inf)
disp('did not converge')
end
Having newton, we make a new m-file as follows:
function z=p3(x)
z=[x(1)+x(2)-2; x(1)^2+x(2)^2-3];
Then, in Matlab, supplying an initial guess to newton, we get:
>> newton('p3',[0;1])
5
iterations
ans =
0.292893218812655
1.707106781187345
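The same computation can be cross-checked outside Matlab. Below is a minimal Python sketch of the same idea (Newton-Raphson with a finite-difference Jacobian) applied to problem 3; the exact roots are 1 ∓ √2/2.

```python
# Cross-check of problem 3: Newton-Raphson with a numerically
# estimated Jacobian, applied to x + y = 2, x^2 + y^2 = 3.
def f(v):
    x, y = v
    return [x + y - 2.0, x*x + y*y - 3.0]

def newton(f, v, tol=1e-12, nmax=60):
    eps = 1e-7
    for _ in range(nmax):
        e = f(v)
        if max(abs(c) for c in e) < tol:
            break
        n = len(v)
        # finite-difference Jacobian, column by column
        J = [[0.0]*n for _ in range(n)]
        for k in range(n):
            vp = list(v); vp[k] += eps
            ep = f(vp)
            for r in range(n):
                J[r][k] = (ep[r] - e[r]) / eps
        # solve the 2x2 system J d = e by Cramer's rule
        det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
        d0 = (e[0]*J[1][1] - J[0][1]*e[1]) / det
        d1 = (J[0][0]*e[1] - J[1][0]*e[0]) / det
        v = [v[0] - d0, v[1] - d1]
    return v

root = newton(f, [0.0, 1.0])
print(root)  # approx [0.2928932, 1.7071068], i.e. 1 - sqrt(2)/2 and 1 + sqrt(2)/2
```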
4. Consider the following m-file:
function z=p4(a)
A=[7,a,0,14; -13,15,12,0; i,1+i,0,0; i,i,i,i];
z=abs(eig(A))-2.6;
[m,n]=min(abs(z));
z=z(n);
Note that the magnitude of the eigenvalue could be slightly below or above 2.6 (the desired value), and
the sense of the difference should not be lost. So we do not want abs( abs(eig(A)) - 2.6 ).
Now, with newton as above, many random initial guesses fail to give solutions. However, after a few
random inputs, I found the following two solutions:
−1.9470938,   54.7383329.
5. Proceeding similarly, only now using the argument of the eigenvalue instead of its magnitude, solutions found include
5.7292335,   84.6838382.
Figure 2: Eigenvalues on the complex plane: 4 curves for 4 eigenvalues. See problems 4 and 5.
You might like to plot all the eigenvalues, in the complex plane, as a is varied. Superimpose a circle of
radius 2.6 (for the previous problem) and a line through the origin at 37° (for this problem). The resulting
figure (figure 2) is consistent with the numerical results (2 solutions for problem 4 and 3 solutions for this
one).
6. This problem involves numerical continuation of solutions. Since a is complex, let us write a = x + iy and
note that we have two things to find. Given one solution, like −1.9470938 + 0i, we want to find a nearby
solution (say at a distance of 0.01 on the complex plane). Then use that newly found solution to obtain
yet another solution, a further distance of 0.01 away.
To this end, we define a0 = −1.9470938 to begin with, and use the following m-file:
function z=p6(x)
global a0
a=x(1)+i*x(2);
A=[7,a,0,14; -13,15,12,0; i,1+i,0,0; i,i,i,i];
z=abs(eig(A))-2.6;
[m,n]=min(abs(z));
z=z(n);
z=[z;norm(x-a0)-0.01];
Then, at the Matlab command,
>> a0=[-1.9470938;0];
>> a_new=newton('p6',a0+randn(2,1)/4)
5
iterations
a_new =
-1.944288035329471
-0.009598316759364
>> plot(a0(1),a0(2),'.')
>> hold on
>> for k=1:100; a_guess=2*a_new-a0; a0=a_new; a_new=newton('p6',a_guess);
>> plot(a0(1),a0(2),'.'); end
The results obtained are shown in figure 3. Further results can be obtained in the same way.
7. Similar to problem 4. Note only that the dot product of normalized eigenvectors will give you the cosine
of the included angle, and you should take the absolute value before you compare with cos 35°.
Figure 3: Continuation in the complex plane (real part of a versus imaginary part of a). See problem 6.
8. Similar to problem 6, except that the evaluation of the function involves numerical integration of a differential equation.
9. Simpson's rule:
function z=simp(a,b,N,fun)
% note: "fun" is a string; it is a file name where the integrand is
% evaluated; it should evaluate the function on arrays (be vectorized)
if floor(N/2)*2 == N
N=N+1;
end
x=linspace(a,b,N); h=x(2)-x(1); y=feval(fun,x);
w=2*ones(1,N-2); w(1:2:end)=2*w(1:2:end); w=[1,w,1]*(h/3);
z=y*w;
The integrand:
function z=p9(x)
z=sin(2*x.^3./(1+log(1+x)));
In Matlab:
>> simp(0,2,101,'p9')
ans =
0.427570776769777
>> simp(0,2,201,'p9')
ans =
0.427571207725242
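The same composite Simpson's rule is easy to reproduce in any language; here is a Python sketch that repeats the N = 201 computation.

```python
# Cross-check of problem 9: composite Simpson's rule for
# integral from 0 to 2 of sin(2 x^3 / (1 + ln(1+x))) dx.
import math

def simp(a, b, N, f):
    if N % 2 == 0:       # Simpson's rule needs an odd number of points
        N += 1
    h = (b - a) / (N - 1)
    s = f(a) + f(b)
    for k in range(1, N - 1):
        s += (4 if k % 2 == 1 else 2) * f(a + k * h)
    return s * h / 3.0

f = lambda x: math.sin(2 * x**3 / (1 + math.log(1 + x)))
val = simp(0.0, 2.0, 201, f)
print(val)  # approx 0.4275712, matching the Matlab value above
```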
10. From Maple (partially evaluating to numerical values), the integrand of problem 11 has the small-x expansion
0.5403023059 x^(−4/5) − 0.5713198318 x^(1/5) − 0.2701511530 x^(6/5) + 0.2502233507 x^(11/5) − 0.2449287360 x^(16/5) + ⋯
11. The above approximation is fine for direct analytical integration from x = 0 to (say) x = 0.01. The
remainder (from 0.01 to 2) can be done using Simpson's rule. There is some arbitrariness in choosing the
limit of 0.01. The power series expansion shows that simply throwing away the integral from 0 to even
10⁻¹⁰ would lead to an error in the second decimal place.
12. In this problem, there is a weak singularity at x = 0. Moreover, the domain of integration is infinite. The
integrand decays slowly and oscillates with decreasing frequency. Thus, for direct evaluation on the real
line, attention is needed both for small x and large x.
As shown above, we can use a power series expansion for small x and evaluate the integral analytically up to some suitable small positive value ε (taken below as 0.05); the series, from Maple, has coefficients involving cos(1) and sin(1), and is not reproduced here. For the tail, define
f(A) = ∫_A^∞ ln(1 + x) cos(1 + √x) / x^0.6 dx.
Substituting u = 1 + √x, so that dx = 2(u − 1) du, we obtain
f(A) = ∫_{1+√A}^∞ h(u) cos(u) du,   (1)
where
h(u) = 2 ln(1 + (u − 1)²) / (u − 1)^0.2.   (2)
We could expand h(u) for large u to obtain some further analytical simplification, but it is unnecessary. Integrating by parts repeatedly in Eq. 1 (see Eq. 2), we obtain
f(A) ≈ −( h(u) sin u + h'(u) cos u − h''(u) sin u − h'''(u) cos u + ⋯ ) evaluated at u = 1 + √A.
A small trick we can use is to let 1 + √A be nπ for a large positive integer n, so that the sine terms are zero
in the expansion above. Using Maple to compute five derivatives of h, we obtain a long expression that
decays rather slowly, such that its asymptotic behavior is clearly established for something like u > 400
(a visual estimate from a graph not reproduced here). The corresponding value of A is inconveniently
large:
A = (400π − 1)² = 160000π² − 800π + 1.
So we do the following (taking an even larger A, with 1 + √A = 1200π, for the asymptotic part):
f(100) = ∫_{100}^{(1200π−1)²} ln(1 + x) cos(1 + √x) / x^0.6 dx + f((1200π − 1)²).
Evaluating the integral using Simpson's rule (using 1000000 points, after some convergence checks), we
have
f(100) ≈ 5.8422584.
We are finally left with one integral to get our final approximation, which is
0.0041938 + ∫_{0.05}^{100} ln(1 + x) cos(1 + √x) / x^0.6 dx + 5.8422584,
which (using Simpson's rule again) gives
∫₀^∞ ln(1 + x) cos(1 + √x) / x^0.6 dx ≈ 1.3703.
Figure 4: Contour in the first quadrant of the complex x-plane. See problem 13.
13. The above evaluation of the integral was laborious. Now we will use contour integration in the complex
plane. To this end, we rewrite the integrand as
Re[ ln(1 + x) e^(i(1+√x)) / x^0.6 ].
The real part can of course be taken after evaluation of the integral; and the evaluation of the integral
can use a suitable contour in the complex plane.
Consider the contour in the first quadrant, shown in figure 4, to be traversed counterclockwise. There
is a small circular arc near the origin, which makes a negligible contribution. Similarly, the outer circle
contributes terms that decay exponentially as its radius grows to infinity. There being no singularities
within the contour, the contour integral is zero. The net result is that the integral from zero to infinity
along the real line is the same as that from zero to infinity along the positive imaginary axis. Having
identified the line integral needed, we can again take real parts before integration, obtaining
∫₀^∞ ln(1 + x) cos(1 + √x) / x^0.6 dx = Re[ i e^(−0.6 πi/2) ∫₀^∞ ln(1 + iy) e^(i(1+√(iy))) / y^0.6 dy ].
Since evaluation of the integral is numerical, the integrand need not be simplified further. Taking three
portions on the y axis (somewhat arbitrarily from 10⁻¹³ to 10⁻³, 10⁻³ to 10³, and from 10³ to 10⁶), with
about 2 million points each time (takes less than a second anyway), Simpson's rule gives the above integral
as
∫₀^∞ ln(1 + x) cos(1 + √x) / x^0.6 dx ≈ 1.37016.
I believe this approximation is much more accurate than the previous one.
14. In this problem, since the boundary conditions are zero, the solution x can be multiplied by any scalar
quantity. It is clear that x'(0) ≠ 0 (because the solution of the initial value problem would then give
x ≡ 0). By these two observations, we may take x(0) = 0 and x'(0) = 1, integrate to t = 2.2 for each trial
λ, and seek the λ-values at which f(λ) = x(2.2) vanishes (figure 5).
Figure 5: Figure for problem 14. Intersections with the horizontal axis give the required λ-values. The circles
were computed as per problem 15.
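The shooting computation is easy to reproduce. Here is a Python sketch (a hand-rolled RK4 plus bisection, rather than the ode45 route the problem stipulates) that locates the first eigenvalue; the bracket [0.5, 1.5] was chosen by inspecting the sign of f(λ).

```python
# Shooting method for problem 14: integrate x'' + lam*(2+cos t)*x = 0
# with x(0)=0, x'(0)=1 and bisect on f(lam) = x(2.2) for the first
# eigenvalue (about 0.8378).
import math

def x_end(lam, T=2.2, n=2200):
    h = T / n
    x, v, t = 0.0, 1.0, 0.0
    acc = lambda t, x: -lam * (2.0 + math.cos(t)) * x
    for _ in range(n):          # classical RK4 on (x, v)
        k1x, k1v = v, acc(t, x)
        k2x, k2v = v + 0.5*h*k1v, acc(t + 0.5*h, x + 0.5*h*k1x)
        k3x, k3v = v + 0.5*h*k2v, acc(t + 0.5*h, x + 0.5*h*k2x)
        k4x, k4v = v + h*k3v, acc(t + h, x + h*k3x)
        x += h*(k1x + 2*k2x + 2*k3x + k4x)/6
        v += h*(k1v + 2*k2v + 2*k3v + k4v)/6
        t += h
    return x

lo, hi = 0.5, 1.5
assert x_end(lo) * x_end(hi) < 0    # the root is bracketed
for _ in range(60):                 # bisection
    mid = 0.5*(lo + hi)
    if x_end(lo) * x_end(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam_first = 0.5*(lo + hi)
print(lam_first)  # approx 0.8378
```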
16. Assume a solution in the form
x(t) = Σ_{k=1}^N a_k sin(kπt/2.2),
substitute into the ODE, multiply sequentially by each sine term and set the integral to zero, to obtain
N linear homogeneous coupled equations involving the parameter λ. Then find λ values such that the
coefficient matrix is singular.
The following lines of Maple code (using 6 sine terms)
N:=6: x:=add(a[k]*sin(k*Pi*t/(22/10)),k=1..N): de:=expand(diff(x,t,t)+lambda*(2+cos(t))*x):
for k from 1 to N do eq[k]:= evalf(integrate(de*sin(k*Pi*t/(22/10)), t=0..22/10)); end:
with(linalg): A:=matrix(N,N): for k from 1 to N do for n from 1 to N do
A[k,n]:=coeff(eq[k],a[n]): end: end: dd:=det(A): evalf(solve(dd,lambda));
give the following results for the first 5 eigenvalues (with increasing, but acceptable, inaccuracy):
0.8378, 3.4545, 7.8151, 13.9193, 21.7849.
17. Writing the system in first order form using q = [x, x']ᵀ, integrate over one period (t from 0 to 2π), once
with q(0) = [1, 0]ᵀ and once with q(0) = [0, 1]ᵀ; the two values of q(2π) so obtained are the columns of
the Floquet (monodromy) matrix. One of the entries I computed was 21.6237.
18. Directly evaluating the Floquet multipliers for various values of a, we can plot the results to see some
solutions as in figure 6. The curves with dots are the multipliers, and the horizontal line is at 5. There
are 8 solutions within the range −5 ≤ a ≤ 5.
Figure 6: Floquet multiplier magnitudes versus a. See problem 18.
19. The point is that although y varies with x, ∫₀³ y dx is just some number. Call it a. Then we are really
solving
y'' + 0.3y' + 3y = a,   y(0) = 0,   y(3) = 1,
with a as yet undetermined. The solution is analytically easy to find (if somewhat long), and we write it as
(say) f(a, x). Now we set ∫₀³ f(a, x) dx = a as originally assumed, and find that a = 2.743245. Substituting
back into f(a, x), we obtain
y(x) = 0.9144 − 0.6080 e^(−0.1500x) sin(1.7255x) − 0.9144 e^(−0.1500x) cos(1.7255x).
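The solution above can be checked numerically; the following Python sketch verifies the boundary values and the self-consistency condition ∫₀³ y dx = a (the tolerances are loose because the printed coefficients are rounded).

```python
# Check of the problem-19 solution: boundary values and the condition
# integral from 0 to 3 of y dx = a, with a = 2.743245.
import math

a = 2.743245
def y(x):
    e = math.exp(-0.15 * x)
    return 0.9144 - 0.6080*e*math.sin(1.7255*x) - 0.9144*e*math.cos(1.7255*x)

assert abs(y(0.0)) < 1e-3          # y(0) = 0
assert abs(y(3.0) - 1.0) < 5e-3    # y(3) = 1

# Simpson's rule for the integral of y over (0, 3)
N = 2001
h = 3.0 / (N - 1)
s = y(0.0) + y(3.0)
for k in range(1, N - 1):
    s += (4 if k % 2 else 2) * y(k * h)
integral = s * h / 3.0
print(integral)  # close to a = 2.743245
```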
20. See the previous problem. Now we set ∫₀³ e^(−y) dx = a, so that we must solve
y'' + 0.3y' + 3y³ = a,
with an auxiliary variable z satisfying z' = e^(−y). We now integrate these using initial conditions y(0) = 0, y'(0) = b and z(0) = 0. The aim is to adjust a
and b so that y(3) = 0 and z(3) = a. The following Matlab code works:
function zdot=p20(t,z)
global a
zdot=[-0.3*z(1)-3*z(2)^3+a; z(1); exp(-z(2))];
and
function z=p20a(x)
global a
op=odeset('reltol',1e-10,'abstol',1e-10);
a=x(1); b=x(2);
[t,z]=ode45('p20',[0,3],[b;0;0],op);
z=z(end,:);
z=[z(2); z(3)-a];
Using the Newton-Raphson method to find a zero of the vector function p20a, we obtain many solutions.
Some of the a-values obtained are
3.1085,   4.1565,   5.8713,   8.5398,   9.5796,   13.2869.
Figure 7: Some solutions y(x) for problem 20.
21. This is similar to problem 20 except you need to add on a third differential equation, as follows:
y'' + 0.3y' + 3y + w = a,   z' = e^(−y),   w' = x cos y.
Now start with initial conditions y(0) = 0, y'(0) = b, z(0) = 0, w(0) = 0. Iteratively adjust a and b such
that z(3) = a and y(3) = 0. Using the Newton-Raphson method, I found
a = 2.2136,   b = 1.2309.
22. Proceeding as before, set ∫₀² (1 + s y(s)) ds = a, and introduce z with
z' = 1 + x y,   z(0) = 0,
with initial conditions y(0) = 0, y'(0) = 1 and z(0) = 0, from x = 0 to x = 2. Now adjust a such that
z(2) = a. Finally, knowing a, integrate one last time to x = 3. I found a = 28.8458 and y(3) = 10.0262.
23. This is a second order ODE. The initial condition y(0) is given, but y (0) must be chosen along with a to
satisfy the two conditions y(a) = y (a) = 0. Many solutions exist; some are shown (for both positive and
negative a) in figure 8.
Figure 8: Some solutions for problem 23. The common point (0, 1) is marked with a circle.
Figure 9: Two solutions for problem 24 (I found others as well). The curves for y and z may be distinguished
by noting z(0.5) = 1.
24. In order to solve an initial value problem, we need y(0) (which we have) along with y'(0) and z(0), which
must be adjusted (Newton-Raphson again) so that y(1) = 0 and z(0.5) = 1. Two solutions are shown in figure 9.
Figure 10: abs(x) and its approximation; the approximation error is plotted separately. See problems 26 and 29.
36. Here the initial motion is dominated by the strong damping: the x'' term is
negligible, and (x')⁵ ≈ −x. Solving the approximating equation x' = −x^(1/5) gives
t = (5/4)(x(0)^(4/5) − x^(4/5)), so the time taken for x to become 1 is about
(5/4)(1000^(4/5) − 1) ≈ 312.7 units. The match with numerics is good.
37. Using fminsearch I find a = 0, b = 0.21042, and g = 0.29028. The contour plot shows a matching minimum
(figure 12).
Figure 12: Contour plot for problem 37, showing a matching minimum.
38. The matrix
[1, 0; 0, 0]
has elements whose magnitudes add up to 1, and the eigenvalues (0, 1). Thus, the maximum distance
achievable between eigenvalues is at least 1. We will see below that the maximum in fact appears to be 1
(since we use numerical optimization, we will not have a proof).
We will use fminsearch. How to define the objective function? The sum of absolute values must add up
to 1, and this could be used as an explicit constraint. However, if A is a square matrix and a is a number,
and if λ is an eigenvalue of A, then aλ is an eigenvalue of aA. And so we can directly minimize (note the
minus sign)
g = −|λ₁ − λ₂| / Σ_{i,j} |A_ij|.
Now the elements of A could be complex, so we presently seem to have 8 independent variables. But
the first element A₁₁ can be assumed real. This is because if it is complex with phase φ, then its phase
can be set to zero by multiplying A with a = e^(−iφ). This multiplication leads to a rotation of the
eigenvalues about the origin in the complex plane, but does not affect their separation. So we seem to
have 7 independent variables. However, scaling up all the elements of A by any real multiplier a does
not change the objective function g, as discussed above, so we might as well take A₁₁ = 1, leaving 6
independent variables.
Try the following Matlab function with fminsearch:
function z=eig_prob_1(x)
% assume x is 6X1 and real, and convert to 3X1 and complex.
x=x(1:3)+i*x(4:6); x=[1;x];
x=[x(1:2),x(3:4)]; z=eig(x); z=-abs(z(1)-z(2))/sum(sum(abs(x)));
The minimized objective function turns out to be −1, but the minimizing x is very large and changes from
run to run. I believe this is because many redundancies remain in the optimization problem formulation.
For example, similarity transformations (A → P⁻¹AP) will leave the eigenvalues unchanged but might
affect the sum of the absolute values of the elements of the matrix; and so every optimal A must have
the property that Σ_{i,j} |A_ij| cannot be reduced by any similarity transformation (the optimization could
conceivably be carried out over the subset of matrices that possess this property). But I do not know how
to use this idea to reduce the number of explicit independent variables.
For our present purposes, the problem as originally stated is solved, and the maximum possible distance
is unity.
Comment: In such simple-minded uses of fminsearch, I usually try several runs with random initial
guesses, and in each case take the offered optimal x as a new initial guess to run it again, several times in
a row. Such persistence does not ensure finding the true optimum, but improves the odds.
39. Let the matrix be called B. Based on our (my) inability in the previous problem to reduce the number of
independent variables to the minimum possible value, let us here choose simplicity and allow every element
to be complex and unrestricted; the constraint Σ_{i,j} |B_ij| = 1 will be implicitly incorporated later.
Upon computing the 3 eigenvalues (λ₁, λ₂, λ₃), we compute 2 complex numbers, namely e₁ = λ₁ − λ₂ and
e₂ = λ₁ − λ₃. These two complex numbers may be also viewed as vectors in the plane, and the magnitude
of their cross product is twice the area of the triangle. Alternatively, we can rotate the vector e₁ through
90 degrees by multiplying with i, and then take the dot product of the two vectors (ie₁ and e₂). At this
point, Σ_{i,j} |B_ij| ≠ 1. Scaling the elements of B uniformly with some factor will scale the eigenvalues by
the same factor and the area A by its square. And so the computed area A should be divided by
(Σ_{i,j} |B_ij|)².
function z=eig_prob_2(x)
% input has 18 elements, convert to 9 complex elements
x=x(1:9)+i*x(10:18); x=[x(1:3),x(4:6),x(7:9)];
z=eig(x); z=[i*(z(1)-z(2));z(1)-z(3)];
z=-abs( prod(z)* cos(phase(z(1))-phase(z(2))) )/(sum(sum(abs(x))))^2/2;
As it did for the previous problem, fminsearch gives different optimizers for different random initial
guesses. But it settles frequently on the value (dropping the minus sign) 0.14433 = 1/(4√3). The optimal
number may be obtained by considering a diagonal matrix, with the diagonal elements equal to one third
of each of the cube roots of unity. There is a local optimum at 0.125 which is frequently offered by
fminsearch, but persistence yields the correct result.
40. For this problem, we can think of each element as e^(iθ), suggesting 9 independent inputs. The number could
be reduced, but that is not necessary with fminsearch. The optimal value obtained (largest area) appears to
be 9√3/4.
41. A straightforward use of Simpson's rule and Newton's method, both already covered. Try the Matlab
function:
function z=p41(q)
N=5001; a=q(1); b=q(2);
x=linspace(0,2,N); h=x(2)-x(1);
w=2*ones(1,N-2); w(1:2:end)=2*w(1:2:end); w=[1,w,1]*(h/3);
y=exp(a*x+sqrt(1+x)).*cos(b*x.^2+log(1+x)); z=y*w-2;
y=x.*y; z=[z;y*w];
Newton's method gives many solutions. A few are:
(a, b) = (0.2219, 0.3493),   (2.8845, 5.1847),   (1.0313, 2.5893),   and   (2.6977, 5.7385).
42. Matlab works fine with complex numbers, and so we can use the following program:
function z=p42(a)
% a could be complex
A=[1,2,-3;1,3,0;a,-2,-3]; B=(exp(a)+cos(2*a))*eye(3); z=det(A-B);
Trying Newton's method with real initial guesses keeps all iterates real, and gives some real solutions. I
found the following two: −0.8370 and 1.5728. But there are infinitely many complex roots as well. Starting
with a few random complex initial guesses, I found the following (there are infinitely many more):
a = 1.3602 + 1.0887i,   1.5208 + 1.0333i,   4.6718 + 1.1726i.
Complex conjugates of the above are roots as well.
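The two real roots are easy to verify directly: the determinant should be near zero at each (a Python check, with the 3 × 3 determinant written out by hand):

```python
# Check of the real roots for problem 42:
# det(A - (e^a + cos 2a) I) should vanish at a = -0.8370 and a = 1.5728.
import math

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def p42(a):
    mu = math.exp(a) + math.cos(2*a)
    M = [[1.0 - mu, 2.0, -3.0],
         [1.0, 3.0 - mu, 0.0],
         [a, -2.0, -3.0 - mu]]
    return det3(M)

for a in (-0.8370, 1.5728):
    print(a, p42(a))   # both determinants are near zero
```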
43. The series theoretically converges. However, direct summation is hopeless. The sum is on the order of
∫₀^∞ ln(x + 2)/(1 + x^1.1) dx ≈ 101 (from Maple).
The numerical sum to 10⁶ terms is 41.5, and the sum to 10⁷ terms is 49.2. How many terms will we need
for the error to be about 0.01? The answer N is given by
Σ_{k=N}^∞ ln(k + 2)/(1 + k^1.1) ≈ 0.01.
Approximately,
∫_N^∞ ln(x + 2)/(1 + x^1.1) dx ≈ ∫_N^∞ ln(x)/x^1.1 dx = 10 (10 + ln N)/N^(1/10),
so that, writing N = 10^p,
10 (10 + p ln 10)/10^(p/10) = 0.01,
whence p is about 51: direct summation is indeed hopeless.
An asymptotic approximation can be found as follows. Let N be large (it will become clear how large).
Define f(x) = ln(x + 2)/(1 + x^1.1). Consider
∫_N^∞ f(x) dx = Σ_{k=N}^∞ ∫₀¹ f(k + u) du.
Since f(x) and its derivatives decay successively faster for large x, we can write
∫_N^∞ f(x) dx = Σ_{k=N}^∞ ∫₀¹ ( f(k) + f'(k) u + (f''(k)/2) u² + ⋯ ) du,
whence
Σ_{k=N}^∞ f(k) = ∫_N^∞ f(x) dx − (1/2) Σ_{k=N}^∞ f'(k) − (1/6) Σ_{k=N}^∞ f''(k) − ⋯.
Applying the above formula to the first sum on the right hand side (with f' in place of f, and noting that
∫_N^∞ f'(x) dx = −f(N)), we obtain
Σ_{k=N}^∞ f'(k) = −f(N) − (1/2) Σ_{k=N}^∞ f''(k) − ⋯,
or
Σ_{k=N}^∞ f(k) = ∫_N^∞ f(x) dx + (1/2) f(N) − (1/12) f'(N) + ⋯.
Therefore
Σ_{k=0}^∞ f(k) ≈ Σ_{k=0}^{N−1} f(k) + ∫_N^∞ f(x) dx + (1/2) f(N) − (1/12) f'(N).
In the above, since N is large, the integral itself is perhaps best evaluated after expanding the integrand
in a series for large x (using Maple). The result, for N = 100, is 92.146. Thus
Σ_{k=0}^∞ f(k) ≈ Σ_{k=0}^{99} f(k) + 92.1458 + (1/2) f(100) − (1/12) f'(100) = 9.1793 + 92.1458 + 0.0145 + 0.00002 ≈ 101.34.
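Two pieces of this estimate are easy to cross-check directly (a Python sketch; the quoted values 9.1793 and 0.0145 come from the text above):

```python
# Cross-check of the problem-43 estimate: the partial sum
# sum over k = 0..99 of ln(k+2)/(1+k^1.1), and the correction f(100)/2.
import math

def f(k):
    return math.log(k + 2) / (1 + k**1.1)

partial = sum(f(k) for k in range(100))
print(partial)          # about 9.1793
print(0.5 * f(100))     # about 0.0145
```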
44. The following Matlab program can be used, with N larger if you want a more accurate result.
function [p1,p2]=p44
N=1e6; s1=zeros(N,1); s2=s1;
for k=1:N; A=rand(2); s1(k)=max(abs(eig(A)))<1; A=randn(2)*.5+.5;
s2(k)=max(abs(eig(A)))<1; end
p1=sum(s1)/N; p2=sum(s2)/N;
I found p1 ≈ 0.50 and p2 ≈ 0.46.
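The same Monte Carlo runs in any language; below is a Python sketch (with a smaller N than the Matlab run, and with the 2 × 2 eigenvalues computed from the characteristic polynomial rather than an eig call):

```python
# Python version of the problem-44 Monte Carlo: probability that both
# eigenvalues of a random 2x2 matrix lie inside the unit circle.
import random

def spectral_radius(a, b, c, d):
    # eigenvalues of [[a,b],[c,d]] from trace and determinant
    tr, det = a + d, a*d - b*c
    disc = tr*tr - 4*det
    if disc >= 0:
        r = disc ** 0.5
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return det ** 0.5   # complex pair: |lambda|^2 = det (det > 0 here)

random.seed(1)
N = 200000
s1 = sum(spectral_radius(*[random.random() for _ in range(4)]) < 1
         for _ in range(N))
s2 = sum(spectral_radius(*[random.gauss(0.5, 0.5) for _ in range(4)]) < 1
         for _ in range(N))
print(s1 / N, s2 / N)  # roughly 0.50 and 0.46
```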
45. This problem just requires repeated ODE solution. Use the following Matlab program to define the ODE
system.
function zdot=p45(t,z)
global a b
x=z(1); xdot=z(2);
zdot=[xdot;-a*xdot-6*b*x-x^3];
Then, at the Matlab prompt, try
global a b
op=odeset('reltol',1e-6,'abstol',1e-6); s=zeros(1000,1);
for k=1:1000; a=rand; b=rand; [t,y]=ode45('p45',[0,4],[1;0],op); s(k)=abs(y(end,1))<0.2; end
mean(s)
I got 0.415 and 0.395 on 2 consecutive runs. With 10000 trials instead of 1000, I got 0.4112 and 0.4085
on 2 consecutive runs. I suggest 0.41 may be a reasonable estimate of the required probability.
46. The probability density function of X is p_X(x) = F'(x). The expected value of X is
∫ x p_X(x) dx ≈ −0.8453 (from Maple).
In the above integral (numerically evaluated), it is sufficient to plot the function and choose a finite range
for the integral, because p_X decays rapidly for large x.
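The expected value can be checked by brute force (a Python sketch, using the CDF F(x) = (3/10) e^(2x)/(1+e^(2x)) + (7/(10π)) tan⁻¹(2x³+5) + 7/20 from the problem statement; the density is obtained by a central difference):

```python
# Numerical check of the problem-46 expected value:
# E[X] = integral of x F'(x) dx, which should be about -0.8453.
import math

def F(x):
    return (0.3 * math.exp(2*x) / (1 + math.exp(2*x))
            + 0.7 / math.pi * math.atan(2*x**3 + 5) + 0.35)

def p(x, h=1e-5):                  # density via a central difference
    return (F(x + h) - F(x - h)) / (2 * h)

a, b, N = -60.0, 60.0, 120001      # wide range; density tails decay like |x|^-4
hh = (b - a) / (N - 1)
s = a * p(a) + b * p(b)
for k in range(1, N - 1):
    x = a + k * hh
    s += (4 if k % 2 else 2) * x * p(x)
EX = s * hh / 3.0
print(EX)   # about -0.845
```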
47. In Maple, the sixfold integral can be evaluated one variable at a time:
f₁ = ∫₀¹ f dx₁,   f₂ = ∫₀¹ f₁ dx₂,
etc. The expressions involved become long. The final integrand, a function of x₆ alone, has 147 terms.
Writing x in place of x₆, the terms involve x, ln 2, ln 3, ln(1 + 6x), ln(4 + 3x), ln(8 + 3x), ln(3 + 2x) and
ln(1 + 2x), with rational coefficients; the full expression is not reproduced here.
The integral finally is a combination of ln 2, ln 3, ln 5, ln 7, ln 11, ln 13 and ln 17 with rational coefficients
(with denominators as large as 518400), and evaluates to 2.41104.
The Monte Carlo integral proceeds as follows. We generate N points uniformly distributed on the 6-dimensional
cube defined by x_k ∈ (0, 1), k = 1, 2, …, 6. The joint probability density function of these
points is some constant p. Requiring
∫₀¹∫₀¹∫₀¹∫₀¹∫₀¹∫₀¹ p dx₁ dx₂ dx₃ dx₄ dx₅ dx₆ = 1,
we get p = 1, so the required integral is just the expected value of f, and may be estimated by averaging the N values of f. Thus
we have the Matlab code:
N=1e6; x=rand(N,6); f=1; for k=1:6 f=f+k*x(:,k); end; f=mean(log(f))
which estimates the integral as (three random attempts) 2.4111, 2.4113, and 2.4115. In general, the error
is inversely proportional to √N.
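The same estimate in Python (a smaller N, so a coarser estimate):

```python
# Python version of the problem-47 Monte Carlo estimate of the integral,
# over the unit 6-cube, of ln(1 + x1 + 2 x2 + ... + 6 x6).
import math
import random

random.seed(2)
N = 200000
total = 0.0
for _ in range(N):
    s = 1.0
    for k in range(1, 7):
        s += k * random.random()
    total += math.log(s)
est = total / N
print(est)   # about 2.411
```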
48. Maple evaluates the integral exactly. The result is a combination of ln 2, ln 3, ln 7, ln 11 and ln 19 with
rational coefficients, and equals 0.0036398 to the digits shown.
To evaluate the integral using a Monte Carlo method, we can use logical variables as follows. Instead of
evaluating
∫₀¹ ∫₀^{x₆} ∫₀^{x₅} ∫₀^{x₄} ∫₀^{x₃} ∫₀^{x₂} f(x₁, x₂, x₃, x₄, x₅, x₆) dx₁ dx₂ dx₃ dx₄ dx₅ dx₆,
we evaluate
∫₀¹ ∫₀¹ ∫₀¹ ∫₀¹ ∫₀¹ ∫₀¹ f g dx₁ dx₂ dx₃ dx₄ dx₅ dx₆,
where
g(x₁, x₂, x₃, x₄, x₅, x₆) = (x₁ < x₂)(x₂ < x₃)(x₃ < x₄)(x₄ < x₅)(x₅ < x₆),
where in turn the right hand side is a product of logical variables (each takes the value 1 if the inequality
holds, and 0 otherwise). The following Matlab code can be used (N = 2 × 10⁷):
N=2e7; m=6; x=rand(N,m); f=1; for k=1:m f=f+k*x(:,k); end; f=log(f);
for k=1:m-1 f=f.*(x(:,k)<x(:,k+1)); end; s=std(f), F=mean(f)
It gives
s = 0.098,   F = 0.003633.
The standard deviation is useful in that the random estimate of the integral (here F) is approximately
normally distributed with mean equal to its correct value and standard deviation equal to s/√N ≈ 2 × 10⁻⁵,
consistent with the exact result above.
49. The following Matlab program simulates a series of coin tosses until 7 heads in a row are obtained, and
returns the number of throws needed.
function N=p49
% how many coin tosses to get 7 heads in a row?
% let heads=1 and tails=0, with 50% probability each
% use 400 throws at a time
x=[]; n=400; head7=0;
while 1-head7
x=[x;(rand(n,1)>0.5)];
p=x(1:end-6);
for k=1:6
p=p.*x(1+k:end+k-6);
end
if any(p)
head7=1; [m,N]=max(p); N=N+6;
end
end
Running this many times (30000 times), we get a sample and compute the mean (253.8) and standard
deviation (247.9).
Comment: Those expecting an approximation to the exponential distribution (wherein mean and standard deviation
are identical) may note the difference of about 6 between the mean and the standard deviation: I assume
this is because one attempt to get 7 heads in a row requires 7 tosses, which adds 6 to the mean.
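The simulated mean can also be checked against the exact value. With E_i the expected number of further tosses given a current run of i heads (E_7 = 0), we have E_i = 1 + (1/2) E_{i+1} + (1/2) E_0, whose solution is the classical E_0 = 2⁸ − 2 = 254, close to the simulated 253.8. A Python sketch:

```python
# Exact mean number of tosses to get 7 heads in a row, from the
# recurrence E_i = 1 + 0.5*E_{i+1} + 0.5*E_0, E_7 = 0.
n = 7
E = [0.0] * (n + 1)            # E[n] = 0 stays fixed
for _ in range(20000):         # repeated sweeps; the update is a
    for i in range(n - 1, -1, -1):   # contraction, so this converges
        E[i] = 1 + 0.5 * E[i + 1] + 0.5 * E[0]
print(E[0])   # 254.0
```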
50. First write a program which, given a point (x, y), will determine whether it lies inside the region of interest.
Here is such a program:
function z=p50(X)
% to determine whether a given point (x,y) is inside the region of interest
y=X(2); x=X(1); z=1;
if y<0, z=0; end % if below x-axis, then outside
c=polyfit([0,3/2],[0,3*sqrt(3)/2],1);
if y>polyval(c,x), z=0; end % if above first inclined edge, then outside
c=polyfit([3/2,3],[3*sqrt(3)/2,0],1);
if y>polyval(c,x), z=0; end % if above second inclined edge, then outside
if (x-1)^2+(y-0.5)^2<0.8^2, z=0; end % inside the first circle, outside the region
if (x-1)^2+(y-1.5)^2<0.7^2, z=0; end % inside the second circle, outside the region
if y<0.3*x^2, z=0; end % below the parabola, outside the region
Then enclose the region in a rectangle, 3 units long
along the x-axis and with its bottom left corner at the origin. The area of the rectangle is 9√3/2 square units, and so
random points uniformly distributed on this rectangle have the constant joint probability density function
p = 2/(9√3). The expected value of the indicator of R at (x, y) is
∫∫ p dxdy = p × Area of R.
With 10000 random points, of which 813 fell inside, the estimated area is
(813/10000) × (9√3/2) = 0.63.
The region, with a parabola and two circles drawn in to show excluded points, is shown in figure 13.
Figure 13: The region of interest for problem 50, with the excluded circles and parabola drawn in.
Comment: Using 300000 points in place of 10000, I got 0.614 (I do not trust the third digit). We could
use a shorter rectangle, e.g., by computing the intersection between the second (right-most) inclined edge
of the triangle and the parabola y = 0.3x2 . I avoided the complication for easier presentation.
Acknowledgements
Sovan Das and Atanu Mohanty provided useful comments. Additionally, some errors were pointed out by Atanu
Mohanty and Saurabh Biswas.