Journal of Volcanology and Geothermal Research 139 (2005) 1 – 21

www.elsevier.com/locate/jvolgeores

Parallel adaptive numerical simulation of dry avalanches over natural terrain

A.K. Patra (a,*), A.C. Bauer (a), C.C. Nichita (b), E.B. Pitman (b), M.F. Sheridan (c), M. Bursik (c), B. Rupp (c), A. Webber (c), A.J. Stinton (c), L.M. Namikawa (d), C.S. Renschler (d)

(a) Department of Mechanical and Aerospace Engineering, State University of New York-Buffalo, 605 Furnas Hall, SUNY, Buffalo, NY 14260, USA
(b) Department of Mathematics, University at Buffalo, SUNY, Buffalo, NY 14260, USA
(c) Department of Geology, University at Buffalo, SUNY, Buffalo, NY 14260, USA
(d) Department of Geography, University at Buffalo, SUNY, Buffalo, NY 14260, USA

Accepted 29 June 2004

Abstract

High-fidelity computational simulation can be an invaluable tool in planning strategies for hazard risk mitigation. The accuracy and reliability of the predictions are crucial to the success of such tools. We present here a new simulation tool for dry granular avalanches that uses several new techniques for enhancing the accuracy of the numerical solution.

Highlights of our new methodology are the use of a depth-averaged model of the conservation laws and an adaptive grid Godunov solver to solve the resulting equations. The software is designed to run on distributed memory supercomputers and makes use of digital elevation data dynamically, i.e., it refines the grid and the input data to finer resolutions to better capture flow features as the flow evolves. Our simulations are validated using quantitative and qualitative comparisons to tabletop experiments and data from field observations. Our software is freely available and uses only publicly available libraries, and hence can be used on a wide range of hardware and software platforms.

© 2004 Elsevier B.V. All rights reserved.

Keywords: granular flow; grain flow; flow simulation; GIS; adaptive methods; parallel computing

* Corresponding author. Tel.: +1 716 645 2593; fax: +1 716 645 3875. E-mail address: [email protected] (A.K. Patra).
0377-0273/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.jvolgeores.2004.06.014

1. Introduction

Volcanic activity results in a variety of mass flows, ranging from passive gas emission and slow effusion of lava to explosions accompanied by the development of a stratospheric plume with associated dense pyroclastic flows of red-hot ash, rock, and gas that race along the surface away from the volcano. Often, these flows mix with melted snow, creating a muddy mix of ash, water, and rock. Seismic activity accompanying an eruption can trigger slope failures, generating giant debris avalanches that can ruin vast areas of productive land, destroy structures, and injure or kill the population of entire cities.
In these avalanches and debris flows, particles are typically centimeter- to meter-sized, and the flows, sometimes as fast as hundreds of meters per second, range over tens of kilometers. As these flows slow, the particle mass sediments out, yielding deposits that can be a hundred meters deep and many kilometers in length.

The task of modeling these events is complex and at present only beginning to be understood. Nevertheless, public safety planning needs and scientific investigations will benefit greatly from the development of tools designed to answer the simple question:

If a mass flow were to be initiated at a particular location, what areas are going to be affected, and to what degree, by that flow?

In this paper, we describe our efforts at developing a tool (the TITAN2D simulation code) to satisfy this need.

A popular class of models for these events treats them as depth-averaged granular flows governed by Coulomb-type interactions (Hutter et al., 1993; Iverson and Denlinger, 2001; Gray, 1997), with or without a pore fluid; this is the model of the physics we choose to use in this development. We use as a starting point the equations of Iverson and Denlinger (2001) in the dry limit. The scale of the flows and the complexity resulting from modeling flows over natural terrain require large-scale computation and special techniques to reliably obtain good numerical simulations of the complex flows; these are the focus of this paper. Our numerical algorithm for solving the governing model equations is an adaptive-grid second-order Godunov solver (Toro, 1997). Local mesh adaptivity is crucial to reliably resolving flow features and to the necessary shock capture. We have also developed suitable computational techniques (multiprocessor computing, dynamic load balancing, etc.) to enable the necessary large-scale simulations on widely available cluster computers and more efficient distributed-memory multicomputers.

An important feature of this work is the incorporation of a direct connection to geographic information system (GIS) databases. Thus, we obtain the required topographic data dynamically as needed by the progress of the simulation, at resolutions appropriate for the accuracy of the computation. These ingredients allow us to simulate large flows over realistic terrain; here, we provide an example of such a simulation at Little Tahoma Peak (see the accompanying paper in this issue by Sheridan et al., 2004).

We begin our discussion with a review of the physics modeling used in our simulation tool and derive the basic governing equations. We follow by providing details of the solution methodology. Numerical tests are used to verify the code and study its performance under different choices of model and numerical parameters. Validation using tabletop experiments and simulation of observed flows completes the presentation. It is important to note that, as a modeling tool, the TITAN2D code has several new features that reduce computation and modeling errors, but, as with all simulation tools for such complex physical phenomena, many assumptions are inherent in the results presented. All results and outputs are hence qualified and subject to the validity of the assumptions made. Regardless, we believe that careful use of such tools can provide much valuable insight and assist in the hazard analysis process.

2. Governing equations

2.1. Models

We model the geophysical mass flows on realistic terrain as depth-averaged granular continua. This approach for describing debris flows was first suggested by Savage and Hutter (1989). The original one-dimensional theory was later generalized to two dimensions by the same authors, by introducing a simple curvilinear coordinate system with orthogonal directions set by the maximum slope (x-axis), the normal to the local surface (z-axis), and a cross-slope axis normal to the other two. However, these equations are not frame-invariant and hence are unsuitable for modeling flows over general terrain. In more recent work, Iverson and Denlinger (2001) derive depth-averaged, frame-invariant equations for fluidized granular masses on three-dimensional terrain. They also include the effect of interstitial fluid using a simple mixture theory approach. These equations form a system of hyperbolic conservation laws, referred to as the debris flow equations (DFE).
In a follow-up paper, Denlinger and Iverson (2001) also report on basic numerical solutions to the DFE using a first-order Godunov method and an approximate Riemann solver.

These debris flow equations in the zero pore pressure limit constitute the starting point of our work. We solve the equations using a finite volume scheme with a second-order Godunov solver. The program runs in parallel, using the Message Passing Interface standard (MPI) to allow communication between multiple processors. The algorithm uses local adaptive mesh refinement for shock capturing and dynamic load balancing for the efficient use of the computational resources.

We begin by briefly reviewing the derivations of the model equations. For a detailed description of the depth-averaged theory for debris flows, we refer the reader to the reference papers cited above.

2.2. Basic equations and boundary conditions

In a fixed Cartesian coordinate system OXYZ, with origin O defined so that the plane OXY is approximately parallel to the basal surface, we write the conservative form of the equations for an incompressible continuum:

\nabla \cdot u = 0    (1)

\partial_t(\rho_0 u) + \nabla \cdot (\rho_0\, u \otimes u) = -\nabla \cdot T + \rho_0 g    (2)

where \rho_0 is the density of the medium, u is the velocity field, T is the Cauchy stress tensor, and g is the gravitational acceleration.

The granular material is assumed to be an incompressible continuum satisfying a Mohr-Coulomb law, which states that slip planes appear inside the bulk as soon as the internal state of stress exceeds the Coulomb failure criterion, \sigma_t/\sigma_n = \tan\varphi_{int}, where \sigma_n and \sigma_t are the normal and shear stresses acting on a plane element inside the granular material, and \varphi_{int} is the internal friction angle of the medium.

Kinematic boundary conditions are imposed at the free surface interface, with equation F_s(x,t) = s(x,t) - z = 0, and at the basal surface interface, with equation F_b(x,t) = b(x,t) - z = 0:

\partial_t F_s + (u \cdot \nabla) F_s = 0 \quad \text{at } F_s(x,t) = 0    (3)

\partial_t F_b + (u \cdot \nabla) F_b = 0 \quad \text{at } F_b(x,t) = 0    (4)

Written explicitly in components, the above equations are:

\partial_t s + v_x \partial_x s + v_y \partial_y s - v_z = 0 \quad \text{at } z = s(x,t)    (5)

\partial_t b + v_x \partial_x b + v_y \partial_y b - v_z = 0 \quad \text{at } z = b(x,t)    (6)

where u = (v_x, v_y, v_z) denotes the velocity vector and its components.

The boundary conditions for the stresses are a stress-free condition at the free surface and a Coulomb-like friction law imposed at the interface between the granular flow and the basal surface:

T^s n^s = 0 \quad \text{at } z = s(x,t)    (7)

T^b n^b - n^b\,(n^b \cdot T^b n^b) = \frac{u_r}{|u_r|}\tan\varphi_{bed}\,(n^b \cdot T^b n^b) \quad \text{at } z = b(x,t)    (8)

The superscripts s and b applied to the stress and to the components of the normal vector refer to values of the variables assumed at the free surface (s) and at the base of the flow (b). n denotes the normals to the surfaces, with the superscripts s and b referring to the free surface and the base of the flow. u_r is the velocity vector whose components are equal to the difference between the upper- and lower-side velocity values across the boundary layer of infinitesimally small thickness that forms at the basal contact surface. The factor u_r/|u_r| in the tangential component of the shear at the surface element of normal n^b indicates that the Coulomb friction opposes the avalanche motion.

The three-dimensional system gives a detailed description of the flow; however, the model requires supplementary relations, such as an equation for the free surface and a three-dimensional treatment of the Coulomb stresses, which add to the complexity of the problem.

Under the additional assumption that the flowing layer is thin compared to its lateral extent, the detailed motion of the mass through depth becomes relatively unimportant except in a thin layer near the bed. The processes in this boundary layer can be approximated by the Coulomb-type basal sliding law, and an average of the system of equations over the depth of the flow provides a simpler model that can be handled numerically. The depth-averaged model is
appropriate for geophysical mass flows when the assumption of shallowness holds. The case for this assumption has been made by many (see, for example, Hutter et al., 1993; Iverson and Denlinger, 2001).

2.3. Depth-averaged theory

In this subsection, we give an outline of the depth-averaging procedure used to derive the system of equations for the dry avalanche model.

We start by integrating the continuity equation in the z direction. Using Leibniz's formula to interchange the differentiation and integration operators, we obtain:

\partial_t h + \partial_x(h\bar{v}_x) + \partial_y(h\bar{v}_y) - \left[\partial_t z + v_x\partial_x z + v_y\partial_y z - v_z\right]_b^s = 0    (9)

where the subscripts x, y, z, and t refer to the coordinate axes and time. The notation \partial_x indicates the partial derivative with respect to x.

In Eq. (9), \bar{v}_x and \bar{v}_y are the averaged lateral velocities, defined as follows:

h\bar{v}_x = \int_b^s v_x\,dz, \qquad h\bar{v}_y = \int_b^s v_y\,dz, \qquad h(x,t) = s(x,t) - b(x,t)    (10)

where s(x,t) and b(x,t) are the free surface and base as defined earlier. Substituting the kinematic boundary conditions from Eqs. (5) and (6) in Eq. (9), we obtain the equation for the depth-averaged mass balance:

\partial_t h + \partial_x(h\bar{v}_x) + \partial_y(h\bar{v}_y) = 0    (11)

Integrating the x momentum equation in the normal direction and using the Leibniz formula to interchange the order of differentiation and integration, we obtain the depth-averaged x momentum balance equation:

\rho\left[\partial_t(h\bar{v}_x) + \partial_x(h\bar{v}_x^2) + \partial_y(h\bar{v}_x\bar{v}_y)\right] - \rho\left[v_x\left(\partial_t z + v_x\partial_x z + v_y\partial_y z - v_z\right)\right]_b^s = -\partial_x(h\bar{T}_{xx}) - \partial_y(h\bar{T}_{yx}) - \left[T_{zx}\right]_b^s + \rho g_x h    (12)

These equations need to be supplemented with constitutive models. The shallowness assumption gives a "hydrostatic" equation for the normal stress in the z direction:

T_{zz} = (h - z)\,\rho g_z    (13)

which after depth averaging becomes a relation for the depth-averaged normal stress in the z direction, \bar{T}_{zz} = \rho g_z h/2. Using the Mohr-Coulomb theory, the depth-averaged normal stresses \bar{T}_{xx} and \bar{T}_{yy} can be related to the normal stress \bar{T}_{zz} by using a lateral stress coefficient k_{ap}, so that:

\bar{T}_{xx} = \bar{T}_{yy} = k_{ap}\,\bar{T}_{zz}    (14)

The active or passive state of stress develops when an element of material is elongated or compressed, and the formula for the corresponding states can be derived from the Mohr diagram. It may easily be shown that:

k_{ap} = 2\,\frac{1 \mp \left[1 - \cos^2\varphi_{int}\,(1 + \tan^2\varphi_{bed})\right]^{1/2}}{\cos^2\varphi_{int}} - 1    (15)

in which "-" corresponds to the active state (\partial\bar{v}_x/\partial x + \partial\bar{v}_y/\partial y > 0) and "+" to the passive state (\partial\bar{v}_x/\partial x + \partial\bar{v}_y/\partial y < 0).

The shear stresses \bar{T}_{yx} and \bar{T}_{xy} can also be related to the normal stresses \bar{T}_{xx} and \bar{T}_{yy}, using a simplification of the Coulomb (nonlinear) model that assumes a constant proportionality, a simplification based on a long history of such practice in soil mechanics (Rankine, 1857) and on an alignment of the stress axes. The equation for the lateral shear stresses can be written as:

\bar{T}_{yx} = \bar{T}_{xy} = -\operatorname{sgn}\!\left(\partial\bar{v}_x/\partial y\right)\,\tfrac{1}{2}\,k_{ap}\,\rho g_z h \sin\varphi_{int}    (16)

Finally, the formula for the shear stress at the basal surface, T_{zx}, can be derived from the basal sliding law. For curving beds, this relation is:

T_{zx} = -\frac{\bar{v}_x}{\sqrt{\bar{v}_x^2 + \bar{v}_y^2}}\,\rho g_z h\left(1 + \frac{\bar{v}_x^2}{r_x g_z}\right)\tan\varphi_{bed}    (17)

where r_x is the radius of local bed curvature, and the "-" indicates that basal Coulomb stresses oppose basal sliding. Note that the above relationship is slightly modified from the original in Iverson and Denlinger (2001), where sgn(\bar{v}_x) was used instead of \bar{v}_x/\sqrt{\bar{v}_x^2 + \bar{v}_y^2}. Our observation indicates that in cases where the momenta in the x and y directions differ significantly (e.g., flow down a channel), this relationship provides the necessary scaling in each coordinate direction. With this modification, the friction mobilized is in proportion to the velocity in that direction.
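As an illustration of how the constitutive relation of Eq. (15) enters a computation, the following is a minimal Python sketch of the lateral stress coefficient. The function name and the boolean flag distinguishing the active and passive states are ours and not taken from the TITAN2D source; the sketch assumes the usual case in which the bed friction angle does not exceed the internal friction angle, so the square root is real.

```python
import math

def lateral_stress_coefficient(phi_int, phi_bed, diverging):
    """Lateral stress coefficient k_ap of Eq. (15).

    phi_int, phi_bed : internal and bed friction angles in radians.
    diverging        : True for the active state (d(vx)/dx + d(vy)/dy > 0),
                       False for the passive state.
    """
    sign = -1.0 if diverging else 1.0   # '-' for active, '+' for passive
    root = math.sqrt(1.0 - math.cos(phi_int) ** 2 * (1.0 + math.tan(phi_bed) ** 2))
    return 2.0 * (1.0 + sign * root) / math.cos(phi_int) ** 2 - 1.0
```

For example, with an internal friction angle of 37 degrees and a bed friction angle of 27 degrees, the active and passive coefficients bracket unity, as expected from the Mohr diagram.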
Substituting the kinematic boundary conditions (Eqs. (5) and (6)) and the conditions for the stresses at the free surface (Eq. (7)), and also using the relations between stresses (Eqs. (14), (16), and (17)) derived from the Coulomb theory, in Eq. (12), the depth-averaged x momentum equation can be written:

\partial_t(h\bar{v}_x) + \partial_x\!\left(h\bar{v}_x^2 + \tfrac{1}{2}k_{ap}g_z h^2\right) + \partial_y(h\bar{v}_x\bar{v}_y) = g_x h - h k_{ap}\operatorname{sgn}\!\left(\partial\bar{v}_x/\partial y\right)\partial_y(g_z h)\sin\varphi_{int} - \frac{\bar{v}_x}{\sqrt{\bar{v}_x^2 + \bar{v}_y^2}}\,g_z h\left(1 + \frac{\bar{v}_x^2}{r_x g_z}\right)\tan\varphi_{bed}    (18)

The relation for the y momentum equation is similar, and it can be obtained by interchanging x and y in Eq. (18).

3. Solution techniques

3.1. First-order scheme

The system of equations governing the flow of dry avalanches on arbitrary topography was derived in terms of conservative variables and can be written in vectorial form (overbars have been omitted to simplify the notation):

U_t + F(U)_x + G(U)_y = S(U)    (19)

where U = (h, hv_x, hv_y)^t, F = (hv_x, hv_x^2 + 0.5 k_{ap} g_z h^2, hv_x v_y)^t, G = (hv_y, hv_x v_y, hv_y^2 + 0.5 k_{ap} g_z h^2)^t, and S = (0, S_x, S_y)^t, where:

S_x = g_x h - h k_{ap}\operatorname{sgn}\!\left(\frac{\partial v_x}{\partial y}\right)\partial_y(g_z h)\sin\varphi_{int} - \frac{v_x}{\sqrt{v_x^2 + v_y^2}}\,g_z h\left(1 + \frac{v_x^2}{r_x g_z}\right)\tan\varphi_{bed}

S_y = g_y h - h k_{ap}\operatorname{sgn}\!\left(\frac{\partial v_y}{\partial x}\right)\partial_x(g_z h)\sin\varphi_{int} - \frac{v_y}{\sqrt{v_x^2 + v_y^2}}\,g_z h\left(1 + \frac{v_y^2}{r_y g_z}\right)\tan\varphi_{bed}    (20)

The components of the unknown vector U represent the pile height and the two components of the depth-averaged momentum. The above system of equations is strictly hyperbolic away from the vacuum state h = 0 and can be solved numerically using standard techniques.

We use an explicit Euler scheme for the differential equation with right-hand side given by the source terms S(U), and a Godunov finite volume solver for the remaining system of hyperbolic conservation laws. Background information on these methods is found, for example, in Toro (1997) and Hirsch (1990). We are currently investigating higher-order methods, including adaptive discontinuous Galerkin methods, and the results will be presented in a future paper.

We use a Cartesian mesh to discretize our domain. The conservative variables (h, hv_x, hv_y) are discretized as piecewise constants on each rectangular computational cell, and the equations are approximated by finite differences. The evolution of the flow to the next time step depends on the advective flux at the cell interfaces, which results from the wave interaction at the boundaries between cells. The fluxes are computed by solving the Riemann problem for the two constant states on each side of the boundary edge. For the one-dimensional system U_t + F(U)_x = S, a Godunov scheme gives the explicit formula U_i^{n+1} = U_i^n - \Delta t/\Delta x\,[F_{i+1/2}^n - F_{i-1/2}^n] + \Delta t\,S_i, where F_{i+1/2}^n is the intercell numerical flux corresponding to the boundary between cells i and i+1. The discussion below describes the treatment of fluxes in the x direction; in practice, a similar expression has to be derived for the physical flux G.

We use the HLL (Harten, Lax, van Leer; Toro, 1997) approximate Riemann solver at the cell interface. Other exact or approximate Riemann solvers may be used [e.g., HLLC (Toro, 1997) and Roe (LeVeque, 1992)]; however, we have found there to be little difference, provided the computational grid is sufficiently fine.

The first-order HLL solver we implemented uses cell-centered values. The characteristic speeds are the eigenvalues of the Jacobian matrix of F and are given by (u_i + c_i, u_i, u_i - c_i), where c_i = \sqrt{k_{ap} g_z h_i}. We estimate the signal velocities in the solution of the Riemann problem by the following choice proposed by Davis (1998):

C^l_{i+1/2} = \min\!\left(0,\, \min(u_{i+1} - c_{i+1},\, u_i - c_i)\right)    (21)

C^r_{i+1/2} = \max\!\left(0,\, \max(u_{i+1} + c_{i+1},\, u_i + c_i)\right)    (22)
and the fluxes at the frontier between cells by:

F^n_{i+1/2} = \frac{C^r_{i+1/2}\,F(U^n_i) - C^l_{i+1/2}\,F(U^n_{i+1}) + C^l_{i+1/2}\,C^r_{i+1/2}\left(U^n_{i+1} - U^n_i\right)}{C^r_{i+1/2} - C^l_{i+1/2}}    (24)

where F is the physical flux described in Eq. (19).

Flow fronts occur when zero flow depth exists adjacent to a cell with nonzero flow depth. The errors in front propagation speeds can be very large; therefore, separate estimates for the speeds are needed in this case. For a front moving in the positive x direction, c_{i+1} = h_{i+1} = 0, and the correct solution consists of a single rarefaction wave associated with the left eigenvalue. The wet/dry front corresponds to the tail of the rarefaction and has exact propagation speed u_{i+1} = u_i + 2c_i. This problem is similar to the problem involving vacuum states in shock tubes, and the rationale for this approach is discussed in Toro (1997).

Front tracking is another way of dealing with wet/dry fronts; however, it is quite complicated and computationally expensive for multidimensional flows, hence we chose not to implement it in the current version of our code.
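To make the first-order scheme concrete, the following is a minimal Python sketch of the HLL intercell flux with the Davis-type signal-speed estimates of Eqs. (21) and (22) and the flux formula of Eq. (24), together with the explicit one-dimensional update. It is an illustration under the assumptions stated in the comments (array layout, function names, and the simple handling of dry cells are ours), not the TITAN2D implementation, which also requires the special front treatment described above.

```python
import numpy as np

def hll_flux_x(UL, UR, kap, gz):
    """HLL flux in the x direction for the state U = (h, h*vx, h*vy).

    UL, UR are the cell-centered states to the left and right of the interface,
    kap the lateral stress coefficient, gz the z-component of gravity.
    """
    def physical_flux(U):
        h, hvx, hvy = U
        vx = hvx / h if h > 0.0 else 0.0
        return np.array([hvx, hvx * vx + 0.5 * kap * gz * h * h, hvy * vx])

    def velocity_and_speed(U):
        h = max(U[0], 0.0)
        u = U[1] / h if h > 0.0 else 0.0
        return u, np.sqrt(kap * gz * h)          # characteristic speed c

    uL, cL = velocity_and_speed(UL)
    uR, cR = velocity_and_speed(UR)
    cl = min(0.0, min(uR - cR, uL - cL))         # Eq. (21)
    cr = max(0.0, max(uR + cR, uL + cL))         # Eq. (22)
    if cr == cl:                                 # both dry: no wave, no flux
        return np.zeros(3)
    FL, FR = physical_flux(UL), physical_flux(UR)
    # Eq. (24)
    return (cr * FL - cl * FR + cl * cr * (UR - UL)) / (cr - cl)

def first_order_update(U, dt, dx, kap, gz, source):
    """One explicit Godunov step for a 1-D row of cells:
    U_i^{n+1} = U_i^n - dt/dx (F_{i+1/2} - F_{i-1/2}) + dt*S_i.
    'source' returns the source vector S(U) for one cell."""
    Unew = U.copy()
    for i in range(1, len(U) - 1):
        Fp = hll_flux_x(U[i], U[i + 1], kap, gz)
        Fm = hll_flux_x(U[i - 1], U[i], kap, gz)
        Unew[i] = U[i] - dt / dx * (Fp - Fm) + dt * source(U[i])
    return Unew
```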
3.2. Second-order description

To implement the second-order Godunov method, we follow the van Leer approach described in Davis (1998). To increase spatial accuracy, the solution is represented by piecewise linear approximations, and slope limiting is used to prevent unphysical oscillations. To increase time accuracy, a second-order explicit predictor-corrector scheme is implemented.

Eq. (19) can be rewritten:

U_t + A\,\partial_x U + B\,\partial_y U = S(U)    (25)

where A and B are the Jacobian matrices of F and G, respectively.

Given U^n_{i,j}, the (i, j) cell average at time n\Delta t, the midtime predictor step is:

U^{n+1/2}_{i,j} = U^n_{i,j} - \frac{\Delta t}{2}\,A^n_{i,j}\,\Delta_x U^n_{i,j} - \frac{\Delta t}{2}\,B^n_{i,j}\,\Delta_y U^n_{i,j} + \frac{\Delta t}{2}\,S^n_{i,j}    (26)

where, in the formula above, \Delta_x U and \Delta_y U are the limited slopes for U in the x and y directions, respectively.

In the corrector step, a conservative update of U is computed as follows:

U^{n+1}_{i,j} = U^{n+1/2}_{i,j} - \frac{\Delta t}{\Delta x}\left[F^n_{i+1/2} - F^n_{i-1/2}\right] - \frac{\Delta t}{\Delta y}\left[G^n_{j+1/2} - G^n_{j-1/2}\right] + \Delta t\,S_{i,j}    (27)

This time, the numerical fluxes are computed using as left and right states the values obtained by interpolating the center values to the edge position; that is, for F^n_{i+1/2} we use Eq. (24) with U^l_{i+1/2,j} = U^{n+1/2}_{i,j} + (\Delta x/2)\,\Delta_x U^n_{i,j} and U^r_{i+1/2,j} = U^{n+1/2}_{i+1,j} - (\Delta x/2)\,\Delta_x U^n_{i+1,j} instead of U^n_i and U^n_{i+1}. In the above formulas, \Delta_x and \Delta_y are used again to denote the limiting slopes in the x and y directions.
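The predictor-corrector structure of Eqs. (26) and (27) can be sketched as follows. The minmod limiter shown here is one common choice for building the limited slopes \Delta_x U and \Delta_y U, and we assume the slopes are stored per unit length; the function signatures are illustrative rather than those of the actual code.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter applied componentwise to two candidate slopes."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def predictor(U, A, B, S, dt, dx, dy, i, j):
    """Midtime predictor of Eq. (26) for cell (i, j).

    U is the array of cell averages (shape nx x ny x 3); A(U) and B(U) return
    the 3x3 Jacobians of F and G, S(U) the source vector."""
    dxU = minmod((U[i, j] - U[i - 1, j]) / dx, (U[i + 1, j] - U[i, j]) / dx)
    dyU = minmod((U[i, j] - U[i, j - 1]) / dy, (U[i, j + 1] - U[i, j]) / dy)
    return (U[i, j]
            - 0.5 * dt * A(U[i, j]) @ dxU
            - 0.5 * dt * B(U[i, j]) @ dyU
            + 0.5 * dt * S(U[i, j]))

def corrector(Uhalf, Fp, Fm, Gp, Gm, S, dt, dx, dy):
    """Conservative corrector of Eq. (27); Fp, Fm, Gp, Gm are the HLL fluxes
    evaluated from the edge-extrapolated states described in the text."""
    return Uhalf - dt / dx * (Fp - Fm) - dt / dy * (Gp - Gm) + dt * S
```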
4. Computational methodology

In this section, we describe the core computational methodologies used in the TITAN2D code.

4.1. Adaptive methods

Adaptive methods, wherein the resolution of the numerical approximation is tailored to the solution, have demonstrated their ability to improve the accuracy of numerical simulation without significantly increasing the computational cost. Berger and Colella examined solution-adaptive schemes for hyperbolic problems in Berger and Colella (1989). Although major gains can be obtained by using adaptive methods, these gains do come with a price. In order to utilize adaptive methods successfully, we need to determine (a) where and how to adapt, and (b) how to make the process efficient.

4.1.1. Error indicators

The prerequisite to the successful use of adaptive methods is knowing where to adapt the computational grid. Ideally, an error estimate is calculated which bounds the error in the numerical approximation, and a convergence rate with regard to the mesh parameters is used to minimize the solution error. For many nonlinear problems, such as the equations presented earlier, determining an error estimate and/or convergence rates can be difficult and computationally very expensive. In these cases, error indicators are often used to adapt the mesh. Error indicators show regions where there are errors in the numerical approximation but do not bound the error. Often, the simplest adaptive strategies involve only examining areas of large derivatives and/or large fluxes or other regions of interest.

In the current version of the TITAN2D tool, we use a relatively simple measure of error for adaptivity. A simple scaled norm of the fluxes around the cell boundary serves as the primary error indicator. We define the quantity:

E_K = \frac{1}{d_K}\oint_{\partial\Omega_K} |F|^2\,ds    (28)

as a measure of the error associated with cell K. A fixed-fraction refinement strategy is used, i.e., all cells whose E_K ranks in the top p percent are selected for refinement. While this strategy does a good job of selecting areas with high flow rates for refinement, it does not correctly track the flow front, where refinement is critical. Thus, we supplement this criterion by also tracking the change in the local flow depth variable with time, i.e., h_K(t_n) - h_K(t_{n-1}). A combination of these criteria appears to provide satisfactory refinement of grids. More sophisticated strategies are the subject of current work.
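A minimal sketch of the refinement selection just described, combining the fixed-fraction criterion on E_K (Eq. (28)) with the change in local flow depth, is given below; the threshold values and names are illustrative assumptions, not TITAN2D defaults.

```python
import numpy as np

def refinement_flags(boundary_flux_norm, cell_scale, h_now, h_prev,
                     top_fraction=0.1, depth_change_tol=1e-3):
    """Select cells for refinement.

    boundary_flux_norm : per-cell boundary integral of |F|^2 (numerator of Eq. (28)).
    cell_scale         : the scaling d_K of Eq. (28).
    h_now, h_prev      : flow depths at t_n and t_{n-1}.
    Cells in the top 'top_fraction' of E_K, or whose depth changed by more than
    'depth_change_tol', are flagged for refinement.
    """
    E = boundary_flux_norm / cell_scale                    # Eq. (28)
    cutoff = np.quantile(E, 1.0 - top_fraction)            # fixed-fraction strategy
    flags = E >= cutoff
    flags |= np.abs(h_now - h_prev) > depth_change_tol     # track the moving front
    return flags
```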
may trigger unrefinement of other elements. If
4.1.2. Local refinement of mesh unrefinement of a group of elements would trigger
Local refinement (often termed h refinement in the unrefinement of other elements, the unrefinement is
literature on adaptive methods terminology we avoid not performed. This ensures that no elements are
here) can be achieved by either splitting an element up unrefined which should not be unrefined. Calculating
into smaller elements or remeshing the domain. the triggered unrefinement can be a significant
Automatic mesh generation is not as efficient as computational task.
splitting the elements, thus we employ the latter. For
the quadrilateral elements used in our work here, an
irregular mesh as shown in Fig. 1 will result from
splitting elements locally. Such irregular meshes
require complex connectivity information to be
maintained and updated as the grid changes. To keep
this information manageable and the coding complex-
ity manageable, we impose the one-irregularity rule
(Demkowicz et al., 1989). This essentially requires
that an element can have at most two neighbors on an
edge. The result of the one-irregularity rule is that the Fig. 1. An adapted irregular mesh.
(3) Unrefinement of an element on a subdomain boundary: Unrefinement of an element which is on the subdomain boundary is not allowed. Because of this, unrefinement on one processor will not require updating neighbor information on another processor. This reduces the amount of interprocessor communication. While this may seem too restrictive, it is alleviated through the use of dynamic load balancing. It is assumed that, after performing dynamic load balancing, the elements that should have been unrefined but were not (because they were on the subdomain boundary) will become interior elements and can be unrefined later on.

4.1.4. Ghost cells

To calculate the finite difference approximations of the derivatives in the governing equations and the fluxes between elements, the elements in the simulation code need solution information from neighboring elements. For serial codes, all elements are on the same processor and thus accessible to the processor. This is not the case for parallel codes, because neighboring elements may be located on a different processor and not be directly accessible. To avoid excess interprocessor communication, a "copy" of the off-processor neighboring elements is stored on the processor. These "ghost cells" only store information, and no simulation calculations are performed on these cells. Fig. 2 shows the ghost cells for a two-subdomain partition.

4.2. Adaptive Godunov algorithm description

The basic Godunov scheme presented in Section 3 can be adapted with relative ease for use on irregular meshes (see also Berger and Colella, 1989). The calculation of numerical fluxes at the interface between different grids and at the interface between domains belonging to different processors has to be treated separately. Fluxes crossing boundaries have to be balanced at each cell interface so that global and local conservation of mass and momentum are preserved.

The calculation of fluxes is done concurrently on the positive_x_side (see Fig. 1) of the current element being updated and on the negative_x_side of the neighbor element that shares the interface. With the "one-irregularity rule", the two new possible configurations are (a) a generation 0 element has a generation 1 neighboring element on the "positive_x_side", and (b) a generation 1 element has a generation 0 element neighbor on the "positive_x_side". For the case when the two elements have the same generation, the basic Godunov scheme is used.

Each cell is defined by its nine nodes. The variables of interest (height and momentum) are stored at the center (node 8). Fluxes are stored at the edge nodes (nodes 4-7). To preserve the conservation of mass and momentum, flux balance has to be imposed at each cell interface. The mathematical relations to be satisfied are F_p = 0.5 (F_m1 + F_m2) for case (a), and F_m = 0.5 (F_p1 + F_p2) for case (b). The indicators m and p refer to the "minus" or "plus" side of the interface being investigated, the indices "1" and "2" denote the sons of the element with the higher generation number, and F denotes the numerical fluxes. The above relations may be regarded as an integral balance of fluxes over appropriate regions of the linear interface.

The second situation where separate treatment of fluxes is needed is at the interface between two processors. For the current element, if the interface is on the positive_x_side, only the flux for the current element is calculated, because the neighbors sharing the interface are ghost cells. Similarly, if the interface is on the negative_x_side, only the fluxes on that side of the current element are updated.
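The flux balance at a refined interface reduces to a very small kernel. The sketch below simply averages the two fine-side fluxes so that the integral of the numerical flux over the coarse edge matches the sum over its two halves, which is the content of the relations F_p = 0.5 (F_m1 + F_m2) and F_m = 0.5 (F_p1 + F_p2) quoted above; the function name is ours.

```python
def coarse_side_flux(fine_flux_1, fine_flux_2):
    """Flux assigned to the coarse element across a refined interface: the
    average of the fluxes computed on its two fine ('son') neighbors, so that
    mass and momentum crossing the interface are conserved."""
    return 0.5 * (fine_flux_1 + fine_flux_2)
```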
4.3. Code integration in a geographic information system (GIS)

Because we deal in our applications with real-world coordinates, we link our new simulation code dynamically to a geographic information system (GIS) that provides a geospatial database. Integration of the simulation code with GIS functionality and GIS data layers requires acquiring appropriate data sources at a resolution compatible with the resolution of our computational grid. Most digital elevation models (DEMs) available use a pixel- or raster-based grid data format instead of the computational mesh (or lattice, in GIS terminology) used in our modeling approach. This requires that the grid-based elevation data be extracted in a way that accurately represents the computational grid, potentially at a different resolution (see Fig. 3). Note that in the case of comparable resolutions, GIS
data must be appropriately interpolated to avoid creating mathematical artifacts that do not represent the existing topography in the real world.

Fig. 2. Example of ghost cells for a two-subdomain partition.

We decided to use the open-source GIS GRASS (http://www3.baylor.edu/grass/, 2003), which allows a tight coupling between the model and the GIS code, to prepare the initial DEM model input at a particular resolution; while the model is running, additional elevation mesh points are requested by the simulation at various resolutions derived from the DEM. Elevation and curvature data are interpolated by the model code based on the demands of the dynamically created computational grid at various resolutions. Because the grid is adapted during the simulation, the computational resolution changes. As new grid points are added to the simulation, new elevation data must be obtained from the GIS system. The other option, deriving elevation and curvature data by interpolating data from the new computational grid's parent cell, degrades the solution quality. Integration of geographic information systems (GIS), such as GRASS (http://www3.baylor.edu/grass/, 2003), with our codes must be accomplished such that our simulations can query the GIS database for accurate topographic information for the newly introduced grid cells. Towards this goal, we have developed a set of interface routines to interactively obtain elevation, slope, and curvature information from GRASS databases.

Fig. 3. GIS and computational grids at different resolutions. In the case of comparable resolutions, GIS data must be appropriately interpolated to avoid creating false artifacts.

If the GIS resolution is much finer than the desired computational resolution, then the GIS data can be
obtained by a simple look-up of the GIS grid cell that contains the point for which new data are being requested. However, if the GIS resolution is comparable or smaller, then this process can lead to artificial features. Because of this, the GIS data must be interpolated carefully using interpolation techniques over the computational cell area. In an early implementation, we used the piecewise constant approach to obtaining elevation data, even for cases in which the GIS resolution was comparable to the computational grid. This resulted in the generation of "false" artifacts, i.e., small zones in which the GIS created artificial steep features. Such artifacts can greatly corrupt the solution. A more careful study of the effect of postprocessing GIS data is currently underway.
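As an illustration of the kind of careful interpolation discussed above, the following Python sketch performs bilinear interpolation of a raster DEM at a computational grid point. It is not the GRASS interface used by TITAN2D; the raster layout and parameter names are assumptions, and no out-of-bounds handling is shown.

```python
import numpy as np

def elevation_at(dem, x0, y0, dx_dem, dy_dem, x, y):
    """Bilinear interpolation of a raster DEM at the point (x, y).

    dem is a 2-D array of elevations whose cell (i, j) is centered at
    (x0 + j*dx_dem, y0 + i*dy_dem)."""
    fx = (x - x0) / dx_dem
    fy = (y - y0) / dy_dem
    j, i = int(np.floor(fx)), int(np.floor(fy))
    tx, ty = fx - j, fy - i
    z00, z01 = dem[i, j], dem[i, j + 1]
    z10, z11 = dem[i + 1, j], dem[i + 1, j + 1]
    return ((1 - ty) * ((1 - tx) * z00 + tx * z01)
            + ty * ((1 - tx) * z10 + tx * z11))
```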
4.4. Parallel adaptive simulations

While adaptive methods can increase the accuracy of a simulation without major increases in computing power, many physical problems still cannot be accurately simulated on single-processor machines. To increase the available computing power, multiple-processor machines need to be used. In order to get the best simulation accuracy, the simulation code must run efficiently on all of these machines, especially those with distributed memory. Good data management and problem decomposition are critical for an adaptive code to run efficiently on such distributed-memory parallel machines.

4.4.1. AFEAPI

An extended version of the finite element data management system Adaptive Finite Elements Application Programmers Interface (AFEAPI; Patra et al., 2002; Laszloffy et al., 2000; Long, 2001) was used in this code. Originally, AFEAPI was developed for static, hp adaptive finite element simulations of linear elastostatics. For this work, AFEAPI was modified to handle a dynamic, h adaptive finite difference simulation of the DFE equations presented earlier. AFEAPI greatly simplified the task of managing adaptivity in a parallel environment.

Adding, deleting, and modifying mesh objects while maintaining mesh consistency in a parallel environment, dynamically adjusting the mesh partitioning, and migrating appropriate cell objects to maintain load balance during the simulation are among the many cumbersome tasks handled by AFEAPI. Some of the principal ideas of AFEAPI are the use of space-filling curves (SFCs; see below) for the cell ordering, a hash table data structure for accessing cell data, and dynamic load balancing. The advantages of using the SFC are that there is support for hierarchical adaptive mesh refinement, fast key generation, key uniqueness, a global address space, and memory locality of data based on the physical locality of mesh objects (i.e., the cells). The locality properties serve to improve cache performance in hierarchical memory systems.

4.4.2. Partitioning with space filling curves

The main goals of load balancing are to assign work evenly to a set of processors and to minimize the communication between processors. For grid-based computations, an easy way to obtain such a work distribution is to partition the mesh and assign the work associated with different pieces to different processors. If the computational load changes as the computation proceeds, as is the case for h-adaptive methods, the load balancing must be performed in conjunction with the computation. This is called dynamic load balancing. The major constraints of dynamic load balancing are minimizing the time to calculate a division of the work and minimizing the number of objects that need to be moved between processors. There are currently many different load-balancing algorithms and libraries available (see Hendrickson and Devine, 2000 for a review). In our work here, we use a variant of the space-filling curve partitioning algorithm originally introduced by Salomon and Warren (1993).

4.4.3. Space-filling curves

Space-filling curves are continuous functions that map a bounded one-dimensional interval R onto a bounded n-dimensional space U^n, h_n: R → U^n. Sagan's (1994) text provides a very readable review of their salient features. The mapping h_n is continuous and surjective but not injective (onto but not one-to-one). In finite precision computations, though, this mapping becomes bijective, and a point x_i ∈ U^n actually represents a "small" hypercube in the n-dimensional space. The size of the hypercube is determined by the recursion level of the SFC and the size of the original n-dimensional domain. If the original n-dimensional domain is mapped into a unit n-dimensional hypercube
as done in Edwards and Browne (1996), then U^n = [0,1]^n and the lengths of the sides of the hypercube will be 2^{-r}, where r is the recursion level of the finite-precision SFC. The top half of Fig. 4 shows how the SFC traverses through a square for the first three levels of recursion. If a set of points is given in U^n, an SFC can be calculated which traverses through all of them. This is done by calculating the inverse mappings of the points in U^n under the SFC, ξ_i = h_n^{-1}(x_i), x_i ∈ U^n, and then sorting the points ξ_i ∈ R. The result of this property is the preservation of spatial locality across the mapping. The bottom half of Fig. 4 shows how the SFC traverses through the points of a nonuniform grid.

Fig. 4. Finite precision SFC passing through the points of a two-dimensional domain.
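Key generation for an SFC ordering can be illustrated with a Morton (Z-order) key, one common space-filling curve; the actual curve and key format used in AFEAPI may differ, but the idea of interleaving the bits of the integer cell coordinates and sorting on the resulting key is the same.

```python
def morton_key(ix, iy, levels):
    """Interleave the bits of the integer cell coordinates (ix, iy) to obtain
    a space-filling-curve key at the given recursion level."""
    key = 0
    for b in range(levels):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

# Ordering cells by their key lays them out along the curve, e.g.:
# cells.sort(key=lambda c: morton_key(c.ix, c.iy, levels))
```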
4.4.4. SFC partitioning

The key idea of the SFC partitioning algorithm is that the task of partitioning objects in an n-dimensional space is easily accomplished by ordering the objects using an appropriate index space and then partitioning the index space. SFCs provide an easy technique for the ordering of objects located in an n-dimensional space. The typical scheme for calculating SFC partitions is shown in Table 1 and is illustrated in Fig. 5. R can be viewed as a weighted line itself, because the objects have already been associated with points on the line. To calculate cuts along the weighted line, most SFC partitioning algorithms perform some type of k-way cut calculation to find the cuts between processors in Step 5 of the algorithm in Table 1.

Fig. 5. SFC partitions of a two-dimensional FEM grid that has been h adapted.

Table 1
Space filling curve partitioning algorithm
1. Find a representative coordinate x_i ∈ U^n for each object i.
2. Calculate a bounding box B^n in U^n such that U^n ⊂ B^n, and a mapping g: B^n → [0,1]^n.
3. Calculate the location in R for each object by h_n^{-1} ∘ g(x_i).
4. Sort all of the objects according to their location in R.
5. Calculate the location of the cuts in R that will produce the desired partitioning.

Fig. 6. Triggered refinement as a result of the one-irregularity rule.
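Steps 4 and 5 of Table 1 amount to sorting the objects by their SFC key and cutting the resulting weighted line into pieces of roughly equal work. A simple greedy version, purely for illustration (production partitioners use more careful k-way cut calculations), is sketched below.

```python
def sfc_partition(keys, work, nproc):
    """Assign each object to a processor by ordering objects along the curve
    (their SFC keys) and cutting the weighted line into nproc pieces of
    approximately equal total work."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    target = sum(work) / nproc
    assignment = [0] * len(keys)
    proc, accumulated = 0, 0.0
    for idx in order:
        assignment[idx] = proc
        accumulated += work[idx]
        if accumulated >= target and proc < nproc - 1:
            proc, accumulated = proc + 1, 0.0
    return assignment
```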
Fig. 7. Allowed and disallowed refinement at a subdomain interface.

Fig. 8. Experimental setup and example runs. (A) Schematic diagram traced from an image showing positions of the masonite planes and sandpile masses. The angle of the plane was measured with a digital construction level. (B) Start of the experiment at 44.3° with a mass of 425.3 g. Image taken 0.23 s after the start of the experiment. (C) Final position of the sandpile from the experiment at 31.8° with a mass of 425.11 g. Image taken 3.13 s after the start of the experiment. (D) Final position of the sandpile from the experiment at 44.3° (as B). Image taken 1.26 s after the start of the experiment. Note the shorter time to final position and the more distal final position than in (C). The final sandpile is outlined in yellow for visualization.
4.4.5. Refinement and partitioning

Recall our description of the "one-irregularity" rule for mesh refinement. This can be quite difficult to implement in a parallel code, because refinement in one subdomain can trigger refinement in other subdomains (see Fig. 6). Because of this, the current refinement scheme does not allow triggered refinement in other subdomains. This is shown in Fig. 7. Thus, in the implementation of the refinement process, we reject any refinements that would induce such a pattern of refinement. While this does have an impact on the scheme, in practice the deleterious effects are negligible, because upon the next repartitioning such cells move away from the "no-refine" zone and are then successfully refined.

5. Verification and validation

We describe here an extensive program of verification and validation of the TITAN2D code. Verification of complex codes like TITAN2D essentially implies testing the code for consistency. We have conducted a series of numerical tests to check for consistent behavior. Principal among these have been: (a) checks for convergence of the solution as the grid parameters dx, dy → 0; (b) comparison of solutions from adapted grids to solutions from very fine nonadaptive grids; and (c) reasonable checks that expected symmetries and other physically expected behavior are observed.

For validation, we employ a series of comparisons of the simulation output to tabletop experiments and field observations. Our primary laboratory-scale tests were a series of sand flows down a flat inclined plane. The sand is allowed to flow down and spread. We describe now the setup and the different runs.

5.1. Validation: inclined plane experiments

5.1.1. Experiments

Laboratory experiments were conducted using sand flows released on a masonite plane (Fig. 8A). In many respects, the experimental setup was similar to that used by Pouliquen and Forterre (2002). The masonite plane measured 190 × 60 cm and consisted of two parts. The first section was tilted at angles of 23.9°-44.3° with an adjustable mount. Particles were released instantaneously on the upper part of this section from either a smaller hemispherical container or a larger cylindrical container. The tilted section was joined to a second section, which dipped from 1° to 2° downstream. The mass of particles released from the hemispherical container was ~43 g, while that released from the cylindrical container was ~425 g. Particles were playground sand grains sieved so that only the 2-2.5 φ (177-250 μm) fraction was used. The particles were dyed blue with clothing dye to aid in visualization. The basal friction angle for this material was tested by a number of methods to lie at 18°-29°.

Fig. 9. Simulated and experimental observations of the front and tail of a pile of granular material sliding down a flat inclined plane at 38.5°. The propagation of the experimental and numerical flows matches well with a time offset of 0.3 s added to the numerical flow. (A) Propagation in the downslope direction as indicated by the position of the flow head and tail. (B) Propagation in the cross-slope direction as indicated by the width, and the extension and elongation of the pile as indicated by the difference in head and tail positions.
Fig. 10. Simulated and experimental observations of the width of a pile of granular material sliding down a flat plane at 38.5°. The effect of grid adaptivity is illustrated in the plot. Here, an adaptive grid with three levels of local refinement on a base grid of 100 × 200 uniform cells yields solutions comparable to a grid with 800 × 1600 cells, while the nonadaptive grid solutions are further away. Experimental observations are close to the computed values from either the fine-resolution grids or the adaptive grid in the later half of the flow. Early on, there is a significant difference between the experimental data and the simulations.

Fig. 11. Shape of the pile after 2.4 s of simulations of flow down an inclined plane starting with a 200 × 100 grid and three levels of local refinement on the left and a nonadapted grid on the right. The y-spread is much smaller for the coarse grid.

The large variance resulted from differences in the test methodology. The basal maximum angle of stability was 36°. Both the internal friction angle and the internal maximum angle of stability were 37.3°. The propagation of the sand was measured by videotaping, while a horizontal grid was projected onto the plane to aid in visualization. Video frames were then grabbed with a digital frame grabber, and the sand propagation was measured directly from the frames by measuring the lateral spreading, as well as the advance of the head and tail of the flowing mass. Because of the difficulty in ascertaining the edge of the flow during
time steps when the material was thinly spread, and because of geometrical distortions, the error in the measurements of the positions of the flow is estimated to range from 1 to ~2.5 cm.

Fig. 12. Shape of the pile after 2.3 s of the simulation of flow down an inclined plane starting with a 200 × 100 grid and three levels of local refinement on the left and a nonadapted 800 × 400 grid on the right.

Fig. 13. Simulated flow on Little Tahoma Peak: simulated runout vs. field observations (red outline; see Sheridan et al., 2004).

A typical experiment proceeded as follows (Fig. 8B-D). The video camera was started and the operator then filled the starting container with sand as the base was placed flush to the test plane. The container was removed with a smooth motion to avoid undue disturbance of the particles. The test mass then began propagating downslope, with the head initially moving at a noticeably greater speed than the tail, which appeared to be stationary for a short time. The sand grains spread laterally as well as downstream, so that the mass rapidly attained a teardrop shape. With time, this teardrop shape elongated and spread laterally, although the tail did ultimately propagate downstream.

Once on the lower test section, the particles in the head began to deposit in a teardrop shape that was noticeably less elongated in the downstream direction than the actively propagating mass. The final disposition of the mass resembled a conic section with its base on the lower test section and its apex at some height on the upper test section that depended on the slope angle of the upper test section. The only
exception to this geometry was in experiments at angles within the range of the basal friction angle, in which case the mass was arrested on the upper test section.

Fig. 14. (A) Aerial view of the southwest flank of Colima volcano from Saucedo et al. (in review). The lighter areas are deposits of the 1991 pyroclastic flows. (B) Initial position of the mass used in the simulation. (C) The simulated mass propagates downslope. (D) After 1000 time steps, the simulated mass is approximately at the positions shown by (CW) and (CC) in (A).

5.1.2. Simulations

The runout distance is one of the most fundamental parameters measured for natural granular flows. It is closely related to the so-called Heim coefficient, the ratio of the fall height to the runout distance for the center of mass of the pile. The final position of the flow front varies little with slope angle because it is controlled by the momentum of the flow as it reaches the base of the first slope section. The final position of the flow tail then controls the runout distance of the center of mass. Fig. 9A shows that the simulations are able to predict the position of the flow tail well over a range of slope inclinations. Fig. 9A and B show sample comparisons of the flow simulations with the experiments for the case of a ~425 g mass and a slope of 38.5°. The plots show good quantitative comparison of the evolving pile shape, speed, and runout
distance if an offset of +0.3 s is applied to the numerical results. We hypothesize that this offset is necessary because of unmodeled internal rearrangement of the packing of the particles that comprise the pile following withdrawal of the container, friction angle measurement errors, and an error in the timing of the beginning of the experiment that must be ≤0.1 s. We investigate next the effect of grid adaptivity on the accuracy of the simulations. Fig. 10 shows that the computed width of the pile in the later half of the flow is well resolved and correlates well with experiments using either an adaptive grid (three levels of local refinement on a base grid of 100 × 200 cells) or a fine grid (800 × 1600 cells). The convergence of the adapted grid solution to that from a fine grid indicates that the code is consistent. The poor correlation early on indicates that the modeling is inaccurate in that flow regime. Figs. 11 and 12 show typical results at approximately the same time (2.4 s after the start of the flow) using a coarse mesh of 200 × 100 grid points, the same initial coarse mesh with three levels of adaptivity, and a fine mesh of 800 × 400 grid points. We notice that all expected symmetries about a midplane are indeed observed. Secondly, the solutions (especially the difficult-to-capture spread in the y direction) from the adaptive mesh and the fine mesh are quite similar, while the solution from the coarse mesh is quite different from that of the fine mesh. This indicates that our adaptivity formulation and implementation are consistent.

The testing of the model against these experiments proved to be more difficult than that against the deposits of natural flows (next subsection). On the open plane, because of the low gradient in pile height near the flow edge, it is critical to carefully define the flow edge. Flows on the open plane also possess obvious geometric characteristics that required careful definition of the coordinate system before model and data matched.

5.2. Validation: tests on real terrain

We include here two sample simulations, on Little Tahoma Peak, Mt. Rainier, Washington, and on Volcan de Colima, Mexico. Fig. 13 shows the simulation of the 1963 rockfall avalanche on Little Tahoma Peak with an overlay showing the extent of the deposit from field observations. Additional details of these calculations can be found in Sheridan et al. (2004).

Fig. 15. Plot shows the load imbalance for each processor when using 64 processors.

Fig. 14 shows calculations on Volcan de Colima, Mexico, for comparison with deposits. There were thousands of rockfalls and numerous block and ash flows during the 1991-1999 eruptions of Colima Volcano, with volumes ranging from a few cubic meters to 10^6 m^3. All flows followed channels or relative topographic lows, propagating for distances
up to 4 km. For a given flow volume, the TITAN2D model approximated flow path and runout distances well, but had difficulty resolving cross-slope extent in areas where the natural flow was confined within channel walls that were poorly resolved on the DEM (Rupp et al., 2003).

In both cases, the simulations provide reasonable comparisons with field observations consisting of flow paths and the area covered by the deposit. Detailed matching of deposit thickness and position, and observations of flow speed, are subjects for future work.

6. Code characterization

In addition to the verification and validation described in the previous section, we have carried out a number of tests to demonstrate the computational efficiency of the code. The primary goal of this testing is to demonstrate that our codes are able to use the parallel computing hardware efficiently, and that the complex procedures for implementing parallel adaptivity do not impose an undue performance penalty.

6.1. Scalability and load balance

A simple measure of parallel code performance is the load balance, i.e., the quality of the problem decomposition. Ideally, each process is assigned an identical amount of work. However, in practice, we deviate from this, and one or more processors are often very lightly loaded. For the TITAN2D code, we measured this load distribution by counting the actual number of cell updates assigned to each processor i and computing L(i) as:

L(i) = \frac{X_i - X_{avg}}{X_{avg}}    (29)

where X_i is the number of cell updates assigned to processor i, and X_avg is the average. Fig. 15 plots the load imbalances associated with the different processors. We observe that the imbalances are extremely small.
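Eq. (29) is straightforward to evaluate from the per-processor counts of cell updates; a short sketch (the function name is ours) is:

```python
def load_imbalance(cell_updates):
    """Per-processor load imbalance L(i) of Eq. (29), given the number of
    cell updates X_i assigned to each processor."""
    avg = sum(cell_updates) / len(cell_updates)
    return [(x - avg) / avg for x in cell_updates]
```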
work. are thus additional overheads incurred by the paralle-
lization. These costs also usually increase with the
number of processors. Scalable parallel codes are able
6. Code characterization to control this increase—thus the amount of useful
work done per processor per second remains relatively
In addition to the verification and validation we constant. Fig. 16 shows the number of cells updated per
described in the previous section, we have carried out second per processor of a PC cluster on 4, 8, 16, 32, and
a number of tests to demonstrate the computational 64 processors for a test run simulating flows on little
efficiency of the code. The primary goal of this testing Tahoma peak. As more processors are added, finer
is to demonstrate that our codes are able to use the grids are employed maintaining the ratio of (50 50
parallel computing hardware efficiently, and the grid cells per processors in the initial grid). We note that
complex procedures for implementing parallel adap- the count of cells updated per second per processor
tivity do not impose an undue performance penalty. appears to be stable and does not degrade significantly

Fig. 17. Output of the TITAN2D model after 1400 time steps for granular flows on the southwest flank of Colima volcano, Mexico. Flows
simulate block and ash flows that have occurred at the volcano during the most recent eruptive episode, which began in 1991. The simulated
flows were initiated at the top of the volcano where the growth of unstable domes in the natural case results in initiation of block and ash flows.
The largest of the flows in nature have propagated to the hill on which the simulated flows stop. Small changes in parameter values or initial
conditions can result in significant errors in estimates of propagation distance. (A) Position of the initial pile for (B)–(E); (B) internal friction
angle 308, bed friction angle 208; (C) internal friction angle 308, bed friction angle 258; (D) internal friction angle 308, bed friction angle 208,
mass twice that of (A); (E) internal friction angle 168, bed friction angle 158; and (F) same as (C) but the center of the initial pile has been moved
150 m.
with increasing numbers of processors. Tests on simulations of other sites have yielded similar results. Our analysis indicates that the slight degradation in performance is largely due to the shared access to a single GIS data set and to the lower scalability of the parallel input-output system available on the commodity PC cluster-type computer used here.

6.2. Parameter studies

The simulations require us to provide parametric inputs for (a) the bed and internal friction angles, (b) the location, extent, and height of the initial mass, and (c) the terrain topography.

In a recent study, Rupp et al. (2003) studied the effect of parameter variations on simulations using the TITAN2D code. Fig. 17 shows the results of these calculations. These results indicate that the outputs of these calculations are critically dependent on the choice of initial conditions and, to a lesser extent, dependent on the surface and the material characterization (the friction angles). Such insights resulting from even these preliminary calculations can be invaluable in guiding future field studies and hazard risk planning.

Fig. 17. Output of the TITAN2D model after 1400 time steps for granular flows on the southwest flank of Colima volcano, Mexico. The flows simulate block and ash flows that have occurred at the volcano during the most recent eruptive episode, which began in 1991. The simulated flows were initiated at the top of the volcano, where the growth of unstable domes in the natural case results in the initiation of block and ash flows. The largest of the flows in nature have propagated to the hill on which the simulated flows stop. Small changes in parameter values or initial conditions can result in significant errors in estimates of propagation distance. (A) Position of the initial pile for (B)-(E); (B) internal friction angle 30°, bed friction angle 20°; (C) internal friction angle 30°, bed friction angle 25°; (D) internal friction angle 30°, bed friction angle 20°, mass twice that of (A); (E) internal friction angle 16°, bed friction angle 15°; and (F) same as (C) but with the center of the initial pile moved 150 m.

7. Conclusions and future work

In this paper, we have described the successful development of a new tool for the simulation of geophysical mass flows. Such a tool can be used by geoscientists and public safety planners seeking to estimate the hazard risk from such flows. In associated efforts, we are also developing visualization and collaboration tools to make our systems widely accessible.

We have attempted to use state-of-the-art computational methodologies, including parallel computing and adaptive schemes, to obtain high-quality numerical solutions. We have also integrated the simulation tool with geographic information systems to obtain accurate topographic data and convenient access to other geospatial features (e.g., the location of roads) that motivate specific simulations. Our software is publicly available, and we have created versions that run efficiently on inexpensive desktop personal computers and also on very high-end supercomputers. The use of grid adaptivity (and the resulting computational efficiency) enables us to run fairly accurate calculations on inexpensive desktop machines. For the best calculations, though, we need to use high-end distributed-memory parallel computers.

Future efforts on the development of the TITAN2D tool will take two directions. The first will be to enhance the quality of the model to enable it to deal with fluidized mixtures, as opposed to the dry avalanches modeled now. Second, as described in the parameter studies, the outputs of the simulations are dependent on the choices of several parameters. Because the parameters for field studies, from the digital elevation maps to the initial conditions and estimates of friction angles, have a large variability, we will explore systematic approaches to quantify this uncertainty.

Acknowledgements

This work was supported by NSF Grant ITR0121254. Computational resources were provided by the Center for Computational Research, University at Buffalo.

References

Berger, M., Colella, P., 1989. Local adaptive mesh refinement for shock hydrodynamics. J. Comput. Phys. 82, 64-84.
Davis, S.F., 1998. Simplified second order Godunov type methods. SIAM J. Sci. Statist. Comput. 9, 445-473.
Demkowicz, L., Oden, J.T., Rachowicz, W., Hardy, O., 1989. Toward a universal h-p adaptive finite element strategy: Part I. Constrained approximation and data structure. Comput. Methods Appl. Mech. Eng. 77, 9-112.
Denlinger, R.P., Iverson, R.M., 2001. Flow of variably fluidized granular material across three-dimensional terrain: 2. Numerical predictions and experimental tests. J. Geophys. Res. 106, 533-566.
Edwards, H.C., Browne, J.C., 1996. Scalable dynamic distributed array and its application to a parallel hp adaptive finite element code. Proc. POOMA '96, Santa Fe, New Mexico, http://www.acl.lanl.gov/Pooma96.
Gray, J.N.M.T., 1997. Granular avalanches on complex topography. In: Fleck, N.A., Cocks, A.C.F. (Eds.), Proceedings of IUTAM Symposium on Mechanics of Granular and Porous Materials. Kluwer Academic Publishers, pp. 275-286.
Hendrickson, B., Devine, K., 2000. Dynamic load balancing in computational mechanics. Comput. Methods Appl. Mech. Eng. 184, 485-500.
Hirsch, C., 1990. Numerical Computation of Internal and External Flows. John Wiley and Sons.
http://www3.baylor.edu/grass/ at Baylor University, accessed April 2003.
Hutter, K., Siegel, M., Savage, S.B., Nohguchi, Y., 1993. Two-dimensional spreading of a granular avalanche down an inclined plane: Part 1. Theory. Acta Mech. 100, 37-68.
Iverson, R.M., Denlinger, R.P., 2001. Flow of variably fluidized granular material across three-dimensional terrain: 1. Coulomb mixture theory. J. Geophys. Res. 106, 537-552.
Laszloffy, A., Long, J., Patra, A.K., 2000. Simple data management, scheduling and solution strategies for managing the irregularities in parallel adaptive hp finite element simulations. Parallel Comput. 26, 1765-1788.
LeVeque, R.J., 1992. Numerical Methods for Conservation Laws. Birkhauser Verlag.
Long, J., 2001. Integrated Data Management and Dynamic Load Balancing for hp and Generalized FEM. PhD dissertation, Mechanical and Aerospace Engineering Dept., University at Buffalo.
Patra, A., Laszloffy, A., Long, J., 2002. Data structures and load balancing for parallel adaptive hp finite element methods. Computers and Mathematics with Applications (to appear).
Pouliquen, O., Forterre, Y., 2002. Friction laws for dense granular flows: application to the motion of a mass down a rough inclined plane. J. Fluid Mech. 453, 133-151.
Rankine, W.J.M., 1857. On the stability of loose earth. Philos. Trans. R. Soc. Lond. 147, 9-27.
Rupp, B., Bursik, M., Patra, A., Pitman, B., Bauer, A., Nichita, C., Saucedo, R., Macias, J., 2003. Simulation of pyroclastic flows of Colima Volcano, Mexico, using the TITAN2D program. European Geophysical Society 2003, Geophysical Research Abstracts, vol. 5, p. 12857.
Sagan, H., 1994. Space Filling Curves. Springer Verlag, Heidelberg.
Salomon, J., Warren, M., 1993. Parallel Hashed Oct-Trees. Proceedings of Supercomputing '93, Portland, Oregon, Nov.
Saucedo, R., Macias, J., Bursik, M., in review. Pyroclastic flow deposits of the 1991 eruption of Colima Volcano, Mexico. Bull. Volcanology.
Savage, S.B., Hutter, K., 1989. The motion of a finite mass of granular material down a rough incline. J. Fluid Mech. 199, 177-215.
Sheridan, M.F., Stinton, A.J., Patra, A., Pitman, E.B., Bauer, A., Nichita, C.C., 2004. Evaluating TITAN2D mass-flow model using 1963 Little Tahoma Peak avalanches, Mount Rainier, Washington. J. Volcanol. Geotherm. Res. 139, 89-102 (this issue).
Toro, E.F., 1997. Riemann Solvers and Numerical Methods for Fluid Dynamics. Springer-Verlag.
