Opened 3 years ago

Last modified 3 years ago

#215 assigned

Build ODB version 1.0.0 (Met Office)

Reported by: Jin Lee Owned by: Tan Le
Priority: major Component: ACCESS model
Keywords: ODB OPS build compile link library APS3 Cc: Jin Lee, Tan Le, Yi Xiao, Peter Steinle, Fabrizio Baordo, Susan Rennie, Zhihong Li

Description

During the OPS32.0.0 build, ODB source files are compiled into object files, which are then linked into OPS directly; the OPS build therefore does not use ODB libraries. However, it is very handy to have a standalone ODB installation consisting of libraries and various command-line scripts.

This ticket describes in detail how to build UK Met Office ODB version 1.0.0.

Change History (15)

comment:1 Changed 3 years ago by Jin Lee

comment:2 Changed 3 years ago by Jin Lee

Status: new → accepted

comment:3 Changed 3 years ago by Peter Steinle

Cc: Zhihong Li added

comment:4 Changed 3 years ago by Jin Lee

Owner: changed from Jin Lee to Tan Le
Status: accepted → assigned

comment:5 in reply to:  4 Changed 3 years ago by Tan Le

Replying to jtl548:

I have compiled and installed the Met Office version of odb1.0.0 based on the info from Xiao. It is now available via "module load odb/1.0.0"; testing of it will rely upon OPS32 ODBcreate.

===
ODB source
https://access-svn.nci.org.au/svn/odb/trunk/odb/Odb-1.0.0-Source
https://access-svn.nci.org.au/svn/odb/trunk/odb/Odb-1.0.0-Source-meto

Can you have a look at the difference?

Odbapi source

https://access-svn.nci.org.au/svn/odb/trunk/odb_api/OdbAPI-0.10.3-Source

compiler and mpi:

module use ~access/modules
module del openmpi intel-fc intel-cc perl netcdf
module load intel-cc/15.0.1.133
module load intel-fc/15.0.1.133

module load intel-mpi/5.0.3.048

ODB schema in OPS32

/g/data1/dp9/ycx548/ops/ops32/r175_bom_nci/src/code/ODB/sql
===

  1. My interpretation of the numerous comments from Jin is that we should use the Met Office version, so I have worked on the original, un-tampered version in

https://access-svn.nci.org.au/svn/odb/branches/local/odb/Odb-1.0.0-Source-meto

as the version in trunk does not compile.

  2. Although I used to compile the older versions of odb with mpicc/mpicxx/mpif90, the ticket summary seems to suggest icc/icpc/ifort. I am unsure whether these were suggestions from the Met Office or Jin's own interpretation, so the first attempt uses the non-MPI compilers. This can be re-compiled if required by OPS.
  3. Again, the ticket summary suggested netcdf version 4+, and again I cannot separate the Met Office's comments from Jin's. In any case that version is not compatible with static libraries, so after a few unsuccessful attempts I decided to use netcdf 3 instead. This issue can be revisited if required.

In summary, the options used to build are:

module use ~access/modules
module unload netcdf intel-cc intel-fc intel-mpi
module del openmpi
module load cmake
#module load netcdf/4.1.3
module load netcdf/3.6.3
module load intel-cc/15.0.1.133
module load intel-fc/15.0.1.133
module load intel-mpi/5.0.3.048

cmake $SOURCE_DIR \
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_INSTALL_PREFIX="/projects/access/apps/odb/linux/odb1.0.0" \
-DCMAKE_SKIP_RPATH=ON \
-DBUILD_SHARED_LIBS=OFF \
-DCMAKE_C_COMPILER=/apps/intel-ct/cc-wrapper/icc \
-DCMAKE_C_FLAGS="-g -traceback -DINTEL -DLINUX -lcurl" \
-DCMAKE_C_FLAGS_DEBUG="-O0" \
-DCMAKE_C_FLAGS_RELEASE="-O3 -DNDEBUG" \
-DCMAKE_CXX_COMPILER=/apps/intel-ct/cc-wrapper/icpc \
-DCMAKE_Fortran_COMPILER=/apps/intel-ct/fc-wrapper/ifort \
-DCMAKE_Fortran_FLAGS="-Bdynamic -g -openmp -fpp -convert big_endian -integer-size 32 -real-size 64 -fpe0 -traceback -assume byterecl -assume cc_omp -assume underscore -names lowercase -DLINUX -lcurl" \
-DCMAKE_Fortran_FLAGS_DEBUG="-O0" \
-DCMAKE_Fortran_FLAGS_RELEASE="-O3" \
-DNETCDF_PATH="$NETCDF_BASE" \
-DNETCDF_STATIC=ON \
-DODB_API_PATH="/projects/access/apps/odb/odbapi/0.10.3" \
-DODB_API_TOOLS=OFF $@
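For reference, a configure step like the one above is normally followed by an out-of-source build and install. A minimal sketch, assuming the cmake command above has been saved as a script; SOURCE_DIR, the build directory, and the script name configure-odb.sh are illustrative placeholders, not paths from this ticket:

```shell
# Assumed workflow around the cmake invocation above. SOURCE_DIR, the
# build directory, and configure-odb.sh are illustrative placeholders.
SOURCE_DIR=$HOME/Odb-1.0.0-Source-meto   # checkout of the -meto branch
export SOURCE_DIR
mkdir -p build && cd build
sh ../configure-odb.sh                   # the cmake command shown above
make -j 4                                # compile
make install                             # install under CMAKE_INSTALL_PREFIX
```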

comment:6 Changed 3 years ago by Tan Le

Please note that odb1.0.0 as available via "module load odb/1.0.0" was an initial attempt to compile and install for testing. It depends on OPS32 and ODB-API as a complete package, to be tested through ODBCreate for all obs types, as there is no known independent testing procedure in place.

Once we gain a bit more knowledge of the software and its use in the coming weeks, it will be properly documented and released for general use; at that stage the EWG can decide on signing it off.

comment:7 Changed 3 years ago by Fabrizio Baordo

Keywords: APS3 added

Possible problem with odb/1.0.0


We might have a problem with odb/1.0.0 on raijin.

This is what I have done:

1) module load odb/1.0.0
2) cd /home/548/ffb548/cylc-run/r272_ops_at_bom/work/1/ops_createbufrdirodb_nci_atms_globalops_x86_64_ifort_opt_mirrorv1/odb (ODB created from rose-stem)

3) Run a simple query on the ODB:

odbsql -q 'select distinct satellite_identifier from hdr, sat, body, radiance'

4) The error I got:

/projects/access/apps/odb/linux/odb1.0.0/bin/odbsql: line 608: ODB_BEBINPATH: parameter not set

Note that I got the same problem even when executing the query (odbsql -q 'select distinct satellite_identifier from hdr, sat, body, radiance') over another ODB (created with bufr2odb):

/short/dp9/jtl548/work/au-aa068/2014120318/odb/ECMA.atms
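For context on this failure mode: "parameter not set" is the shell's own diagnostic for referencing an unset variable under set -u (or a ${VAR:?} expansion), which is presumably what line 608 of the odbsql script does. A minimal sketch of a defensive check before invoking odbsql; the fallback path below is an assumption for illustration, not the actual fix that was applied:

```shell
#!/bin/sh
# Sketch: guard against an unset ODB_BEBINPATH before calling odbsql.
# The fallback path is an assumption for illustration only.
if [ -z "${ODB_BEBINPATH:-}" ]; then
    echo "ODB_BEBINPATH not set; using a session default"
    ODB_BEBINPATH=/projects/access/apps/odb/linux/odb1.0.0/bin
    export ODB_BEBINPATH
fi
echo "ODB_BEBINPATH=$ODB_BEBINPATH"
```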

comment:8 Changed 3 years ago by Tan Le

Sorry for the glitch. The problem is now fixed; please reload odb/1.0.0 and try again.

comment:9 Changed 3 years ago by Fabrizio Baordo


Possible problem with odb/1.0.0


The ODB_BEBINPATH problem is fixed, but running the same query now gives the following error:

/projects/access/apps/odb/linux/odb1.0.0/bin/odbmd5sum: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory

/projects/access/apps/odb/linux/odb1.0.0/bin/ioassign: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory

Maybe another module needs to be loaded?
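As a general diagnostic (not specific to this installation), ldd lists the shared libraries a binary requires; entries marked "not found" are the ones the dynamic loader cannot resolve, and loading the matching Intel compiler module (or extending LD_LIBRARY_PATH) is the usual remedy for libimf.so. A sketch using /bin/sh as a stand-in for the ODB binaries:

```shell
#!/bin/sh
# Sketch: check a binary's shared-library dependencies with ldd.
# Substitute /projects/access/apps/odb/linux/odb1.0.0/bin/odbmd5sum
# (or ioassign) for /bin/sh on raijin.
BIN=/bin/sh
if ldd "$BIN" | grep -q "not found"; then
    echo "missing libraries: try 'module load intel' or extend LD_LIBRARY_PATH"
else
    echo "all shared libraries resolved for $BIN"
fi
```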

comment:10 Changed 3 years ago by Tan Le

libimf is part of the Intel compiler runtime; can you also "module load intel"?

comment:11 Changed 3 years ago by Fabrizio Baordo

ttl548 fixed the issue:

Modules odb/1.0.0 + intel-fc/12.1.9.293 are now OK for querying ODB through odbsql.

The only remaining issue is dealing with the path length, e.g.:

Executing the odbsql within:

/home/548/ffb548/cylc-run/r272_ops_at_bom/work/1/ops_createbufrdirodb_nci_atms_globalops_x86_64_ifort_opt_mirrorv1/odb/

it does not work, but if you create a soft link

ln -s /home/548/ffb548/cylc-run/r272_ops_at_bom/work/1/ops_createbufrdirodb_nci_atms_globalops_x86_64_ifort_opt_mirrorv1/odb atmsOdb

and execute odbsql from within the link (e.g. cd atmsOdb), it is OK.

comment:12 Changed 3 years ago by Scott Wales

You can get a module to automatically load a dependency by adding to the modulefile:

if ![ is-loaded intel-fc ] {
    module load intel-fc
}

comment:13 Changed 3 years ago by Tan Le

thanks Scott for the suggestion.

I have now included the test in the module env.

comment:14 Changed 3 years ago by Jin Lee

I remember someone exhorting the practice of loading specific versions, and not relying on defaults, when loading modulefiles. So in this case would the "best practice" be,

if ![ is-loaded intel-fc ] {
    module load intel-fc/15.0.1.133
}

since "intel-fc/15.0.1.133" was used to build the ODB software?

comment:15 in reply to:  14 Changed 3 years ago by Peter Steinle

Replying to jtl548:

I remember someone exhorting the practice of loading specific versions, and not relying on defaults, when loading modulefiles. So in this case would the "best practice" be,

if ![ is-loaded intel-fc ] {
    module load intel-fc/15.0.1.133
}

since "intel-fc/15.0.1.133" was used to build the ODB software?

Sounds like a much safer way of ensuring reproducibility.

I do have a concern that this may increase the cost of maintaining the modules. There is only one way I can think of to test this: give it a try. If it looks like it is becoming too much of a burden, we can look for another solution.

I would really value the chance to discuss this with someone experienced in maintaining modules (and then we can put a summary into this ticket).

Note: See TracTickets for help on using tickets.