LES code

Philosophy

We are happy to share the code with colleagues and we hope that it is useful to their work; we only kindly ask that you acknowledge and cite the work.
The code is written to be as simple, clear, and concise as possible. Thus, by design, there are no input checks; users are expected to know what they are doing. For instance, if you are using the sixth-order scheme, you need to specify at least 6 ghost cells in the input.

Legacy

In June 2008 we received a version of “UCLA LES” from Prof. Stevens (then at UCLA, now at the Max Planck Institute for Meteorology). Although the present code still bears similarity to the original UCLA model, almost all of the code has been rewritten and parts of it restructured. Thus, the present functions and modules are not compatible with the legacy code. Even though parts of the model were replaced, the model is the result of many past contributions. The contributions to the UCLA model can be found in the original description (please do not use this as a reference for the present model). Dr. Daniel Chung (University of Melbourne) and Dr. Michio Inoue (MathWorks) contributed to the turbulence closures, advection schemes, Poisson solver, and boundary conditions. We used computer code from a stretched-vortex model implementation by Prof. D. Pullin (Caltech).

Download

Please fill out this form and we will email you a copy of the computer code.

Documentation

These instructions are very basic. We are working on detailed documentation. Please contact us if you need any help.

Compile:

The code depends on the FFTW3 and NetCDF libraries. Please edit two lines in the Makefile, which is in the build directory, with the location of the NetCDF and FFTW3 libraries:

NCDF = $(HOME)/local/netcdf
FFTW3 = $(HOME)/local/fftw3
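
The paths above are only examples; if NetCDF and FFTW3 are installed elsewhere (for instance, in a system-wide location), point the variables to your installation, e.g.:

NCDF = /usr/local
FFTW3 = /usr/local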

Depending on the compiler you are using, you’ll have to adjust the compiler flags, e.g., for gfortran:

FFLAGS = -O3 -fdefault-real-8 -fdefault-double-8 -ffree-line-length-0
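
If you use a different Fortran compiler, the promotion flags differ. For example, with the Intel compiler a rough equivalent (a suggestion only, not part of the distributed Makefile) would be:

FFLAGS = -O3 -r8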

After you specify the location of the libraries, the compiler, and the flags, build the serial version by running the following in the build directory:

make seq

For the parallel version:

make mpi

If you just type ‘make’, it will build the (default) serial version.
IMPORTANT: If you are compiling both the serial and MPI versions, you need to do a ‘make clean’ in between; otherwise, even though the executable will build successfully, you’ll get strange error messages when you run.
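
For example, to build both executables in one go from the build directory:

make seq
make clean
make mpi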

The executables are called les.serial and les.mpi (for the serial and parallel versions, respectively) and are placed in the build and bin directories.

Run:

The code will read the input file les.input, which contains all information about the model configuration. The input file should be placed in the current directory. Sample input files are in the applications directory.

You can run, for example from the applications directory, with:

../bin/les.serial

or

mpirun -np 16 ../bin/les.mpi

If the code runs fine, you should see something like this in the standard output:

*** Output NetCDF: gabls.0 at t = 0.000000000000000E+000
force::force_casespecific: no case forcing
nstep = 1 t = 0.100000E-01 0.00 dt = 0.1000E-1 cfl = 0.0200 cpu = 0.796
nstep = 2 t = 0.142300E-01 0.00 dt = 0.4230E-2 cfl = 0.0085 cpu = 0.823
nstep = 3 t = 0.226553E-01 0.00 dt = 0.8425E-2 cfl = 0.0169 cpu = 0.824
nstep = 4 t = 0.353309E-01 0.00 dt = 0.1268E-1 cfl = 0.0254 cpu = 0.828

Output:

Basic information about the current time step will be printed to standard output.
The LES will write two files with flow statistics called:
stats.ps.NAME.nc
stats.ts.NAME.nc

where NAME is the user-specified name for the run.

The 3D output is written in files:
NAME.STEP.PROCESSOR.nc

where STEP is the integer time step number and PROCESSOR is the MPI rank. Each MPI rank opens its own file and writes its own data to disk (for “small” runs with fewer than about 5000 CPUs this works well). There are a few ways of “joining” the data into a single dataset for analysis or visualization. I am attaching a Matlab script that will read some variables and join each of them into a single array. Please change the first three lines to the NAME, STEP, and number of MPI ranks of your files.
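
For quick inspection, the Python sketch below (an illustration only, not the attached Matlab script; the netCDF4 package, run name, step, and rank count are assumptions/placeholders) opens each per-rank file and lists its dimensions and variables. The actual joining of the subdomains into single arrays depends on the domain decomposition, so please follow the attached Matlab script for that step.

# Minimal sketch: inspect the per-rank 3D output files NAME.STEP.PROCESSOR.nc
from netCDF4 import Dataset

name = 'gabls'   # NAME: user-specified run name (placeholder)
step = 0         # STEP: integer time step number (placeholder)
nranks = 16      # number of MPI ranks used for the run (placeholder)

for rank in range(nranks):
    # adjust the pattern if STEP or PROCESSOR are zero-padded in your files
    fname = f'{name}.{step}.{rank}.nc'
    with Dataset(fname) as nc:
        print(fname, list(nc.dimensions), list(nc.variables))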

 

Test run:

In the directory applications, the file les.input has the setup for a stably stratified Ekman layer. It is the case of the attached paper (Beare et al. 2006). It is also one of the cases in Matheou & Chung (2014). The LES will simulate an Arctic atmospheric boundary layer for 9 hours. The boundary layer is driven by a constant geostrophic wind and a constant surface cooling rate. After 8 hours, the boundary layer reaches a quasi-steady state with a depth of about 200 m. Using a 128 x 128 x 100 grid with 4-m grid resolution, it takes about 8 hours on 16 CPU cores to complete the simulation.
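
For example, to run this case in parallel on 16 cores (assuming the MPI executable was built):

cd applications
mpirun -np 16 ../bin/les.mpi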

Some basic input options:
The number of grid points is set by:
nx = 128
ny = 128
nz = 100

Grid spacing is set by:
deltax = 4.
deltay = 4.
deltaz = 4.

SGS model is set by:
sgsmodel = 'spiral'
To change to the Smagorinsky model, use:
sgsmodel = 'smagorinsky'
To set the Smagorinsky constant, add:
cs = 0.2

When
timesteptype = 0
the model will adjust the time step size to maintain a constant CFL number (after the first 100 steps).
The CFL number is set by:
cflmax = 1.2
For all linear advection schemes the CFL stability limit is sqrt(3) ≈ 1.73.
The value 1.2 is recommended to provide some safety margin. For stably stratified flows, cflmax = 1.5 usually works well.
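
Putting the options above together, the corresponding lines of les.input would look something like the following (only the subset of options discussed here; see the sample input files in applications for the complete and exact layout):

nx = 128
ny = 128
nz = 100
deltax = 4.
deltay = 4.
deltaz = 4.
sgsmodel = 'spiral'
timesteptype = 0
cflmax = 1.2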