This document describes an Antelope Software toolbox for Matlab. Matlab is a product of The MathWorks, Inc.; Antelope is a product of Boulder Real-Time Technologies, Inc. Included in Antelope is the Datascope relational database system. This toolbox closely follows the interfaces to Datascope and Antelope built into other scripting environments such as Perl and Tcl/Tk. It was developed under Solaris 2.6 on Sun Ultra computers, using Matlab version 5.3.
The Antelope Toolbox for Matlab is intended to be installed in
In addition to the presence of these component files, two links need to be installed, and the Antelope toolbox commands must be made available to Matlab users. The recommended way to meet these conditions is to run the program install_matlab_antelope_links, which makes the correct links and then prompts you to edit $MATLAB/toolbox/local/pathdef.m.
For reference, the links made by install_matlab_antelope_links are from
$MATLAB/toolbox/antelope -> $ANTELOPE/data/matlab/antelope
$MATLAB/help/toolbox/antelope -> $ANTELOPE/data/matlab/antelope/html
The required edits are to modify pathdef.m for your installation (the default location is $MATLAB/toolbox/local/pathdef.m) to include two more entries: one for the toolbox commands (antelope/antelope) and one for the toolbox examples (antelope/examples). Here are a few lines from a properly modified pathdef.m file:
All toolbox commands are documented with the standard Matlab help utilities. To see a list of available commands, type
or (for the Matlab help window)
or (for an HTML index of the help entries in a web browser)
For help on individual commands, give the name of the command. For example:
DBOPEN Open a Datascope Database
DBPTR = DBOPEN ( FILENAME, OPENTYPE )
dbopen opens the database specified by the path name
FILENAME, using the permissions given by opentype. A
database pointer with the database index filled in is
returned in DBPTR. The opentype may be either r (for read
only) or r+ (for reading and writing). In the latter
case, the db package will attempt to open tables
read/write, but if permissions are incorrect, will open
[Antelope is a product of Boulder Real Time Technologies, Inc.]
The other versions of the help system also work for individual commands:
For further insight, consult the man pages and manuals provided with the Antelope software and the Matlab software.
Databases are opened with the dbopen command, which takes a filename and a permissions flag. For convenience the Matlab Antelope Toolbox contains a demonstration database from the Joint Seismic Program Center. The schema for this database is CSS3.0. The filename of this database is available through the command
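For instance, a sketch of retrieving the path and opening the database, using the dbexample_get_demodb_path helper described later in this document:

```matlab
>> dbexample_get_demodb_path;          % sets the variable demodb_path
>> db = dbopen( demodb_path, 'r' );    % open the demo database read-only
```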
The dbopen command returns a four-element structure called a database pointer:
Under normal conditions the user does not modify these fields directly, with the possible exception of the record field. Two tools are provided to aim the database pointer at specific parts of the database (i.e., set the integers correctly). dblookup is the more general of the two. A shorthand version of dblookup, dblookup_table, is provided for the most common operation, aiming the database pointer at a given table of the database:
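For example, a sketch of aiming the pointer of an opened database at the origin table:

```matlab
>> db = dblookup_table( db, 'origin' );
```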
Databases may be closed with the dbclose command, which destroys the database pointer:
The following examples show some common database operations. These examples presume you have already opened the database and looked up the origin table, as shown in the steps above. We can subset our database:
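A sketch, using a subset expression on the origin table's depth field (the threshold here is illustrative):

```matlab
>> db = dbsubset( db, 'depth > 15.0' );   % keep only hypocenters deeper than 15 km
```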
Find out how many records we have:
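For instance, with the dbquery command (described in detail later in this document):

```matlab
>> nrecords = dbquery( db, 'dbRECORD_COUNT' )
```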
Or ask for several columns of values:
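A sketch using dbgetv, which can pull entire columns out of a table at once:

```matlab
>> [lat, lon, depth, time] = dbgetv( db, 'lat', 'lon', 'depth', 'time' );
```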
Convert the epoch-times (seconds since 1970) for the hypocentral occurrence time to a standard readable format:
' 5/04/1992 (125) 8:45:10.089'
' 5/12/1992 (133) 18:05:42.600'
' 5/15/1992 (136) 7:05:05.300'
' 5/17/1992 (138) 9:49:19.100'
' 5/17/1992 (138) 9:49:21.689'
' 5/17/1992 (138) 10:15:31.300'
' 5/17/1992 (138) 21:36:00.492'
' 5/19/1992 (140) 14:42:48.813'
' 5/20/1992 (141) 12:20:34.700'
' 5/21/1992 (142) 4:59:57.500'
' 5/21/1992 (142) 5:00:00.399'
' 5/21/1992 (142) 18:05:48.543'
' 5/22/1992 (143) 21:40:36.691'
' 5/25/1992 (146) 2:51:32.311'
' 5/25/1992 (146) 16:55:04.100'
' 5/27/1992 (148) 5:13:38.800'
' 5/27/1992 (148) 5:13:41.635'
Or we can customize the time-conversion format:
>> epoch2str(time,'%A %b %d %I:%M %p %Z')
'Wednesday May 20 12:20 PM UTC'
'Thursday May 21 04:59 AM UTC'
'Thursday May 21 05:00 AM UTC'
'Thursday May 21 06:05 PM UTC'
'Wednesday May 27 05:13 AM UTC'
'Wednesday May 27 05:13 AM UTC'
As an aside, we can go the other way too:
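That is, str2epoch converts a human-readable time string back into epoch seconds. A sketch (the accepted string formats are documented in the Antelope manuals):

```matlab
>> t = str2epoch( '5/20/1992 12:20:34.7' );
```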
We can pick out the first record in our database view (note the indexing convention!):
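Datascope record numbers begin at zero, so a sketch of selecting the first record looks like this:

```matlab
>> db.record = 0;     % the first record: Datascope indexing starts at zero
>> [lat, lon] = dbgetv( db, 'lat', 'lon' );
```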
Find the iasp91 P-phase travel time in seconds from the hypocenter to Fairbanks:
We can launch a spreadsheet tool (dbe) on our whole database:
and examine the individual tables by clicking on the buttons.
The database contains data for one earthquake. We can get the data for the P wave in one of two ways. First we need to get the correct database pointer:
>> db=dblookup_table(db,'wfdisc');
>> dbt=dblookup_table(db,'arrival');
>> db= dbsubset(db,'arrival.chan == wfdisc.chan');
>> dbt=dblookup_table(db,'assoc');
>> dbt=dblookup_table(db,'origin');
>> db=dbsubset(db,'sta == "CHM" && chan == "BHZ"');
Now we have a couple of options for getting data. We can use trload_css to load the database waveform contents into a trace object (another database pointer that includes information on waveforms loaded into memory), or we can use trgetwf. The former, which is the preferred method, requires us to call trextract_data to get the actual waveform data:
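A sketch of the trace-object route, assuming st and et hold the epoch start and end times of the desired window:

```matlab
>> tr = trload_css( db, st, et );    % load the waveforms into a trace object
>> tr.record = 0;                    % aim at the first trace
>> data = trextract_data( tr );      % copy the samples into a Matlab vector
```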
Or we can go directly to the waveform data from the database pointer:
Response information stored in a database may be loaded into a dbresponse object for evaluation. We precede our demonstration of this with an extraction of the correct filename from the database:
>> db=dblookup_table(db,'sensor');
>> dbinst=dblookup_table(db,'instrument');
>> db.record=dbfind(db,'sta == "CHM" && chan == "BHZ"');
/opt/antelope/4.2u/data/matlab/antelope/examples/demodb/response/sts2_vel_RT72A.1
Now we use this filename to construct a dbresponse object:
Next we use the eval_response command to evaluate the response curve at 5 Hz, noting the conversion to radians/sec:
The returned value is in general complex. Next we evaluate the response for several frequencies at once:
These results are of course amenable to standard Matlab processing:
When we are done with the dbresponse object, we remove it with the free_response command:
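The whole response workflow might be sketched as follows, assuming the variable file holds the response filename extracted above and that dbresponse takes that filename directly:

```matlab
>> resp = dbresponse( file );                  % construct the dbresponse object
>> value = eval_response( resp, 2 * pi * 5 );  % evaluate at 5 Hz, in radians/sec
>> free_response( resp );                      % release the object when done
```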
Antelope parameter files provide a mechanism for specifying program configuration in ASCII text files. For complete documentation, see the Antelope manuals.
As an example, here's a small text file in my current working directory:
To open this as a parameter file, type the following:
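For a parameter file named test.pf, this might look like the following sketch (the dbpf command is described in detail below):

```matlab
>> pf = dbpf( 'test' );
```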
The returned object is called a parameter-file (dbpf) object. This one was constructed from the single file shown above. In general, however, the PFPATH environment variable specifies all the locations that may contain parameter files, and all files of the specified name are read. Repeated parameters are overwritten in the order in which they are read, allowing users to override the default settings of software packages with partial parameter files in their own directories, given correct settings of PFPATH.
To see which existent, readable files will contribute to a dbpf object, use pffiles:
To see all the possibilities that are investigated, regardless of whether they exist or are readable, use the `all' option:
'/opt/antelope/4.2u/data/maps/site/test.pf'
'/opt/antelope/4.2u/data/pf/test.pf'
Now, to see the parameter names in the parameter-file object, use pfkeys:
To convert the entire object to a string, use pf2string:
To extract a single numeric parameter out of the dbpf object, use pfget_num. This actually retrieves the parameter as a string, then applies the Matlab str2num function.
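A sketch, where the parameter name samprate is hypothetical and pf is an open dbpf object:

```matlab
>> samprate = pfget_num( pf, 'samprate' );   % 'samprate' is an illustrative name
```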
To get string values, use pfget_string:
To get boolean values, use pfget_boolean. This returns -1 (which evaluates to true in an if statement) for affirmative values (`true','yes', etc.) in the parameter file, and 0 for negative values.
Lists of things may be retrieved from the parameter file with pfget_tbl:
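Sketches of the string, boolean, and table accessors; the parameter names verbose and stations are hypothetical:

```matlab
>> name     = pfget_string( pf, 'site_database' );
>> verbose  = pfget_boolean( pf, 'verbose' );    % -1 for true, 0 for false
>> stations = pfget_tbl( pf, 'stations' );       % returns a list of entries
```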
Also, the parameter file may contain associative arrays of key-value pairs. Notice that such an entity is really just a nested parameter file, so these are returned as subsidiary dbpf objects, as shown by this return from the pfget_arr command:
Of course, Matlab has a built-in strategy for dealing with blocks of key-value pairs, namely the structure. Therefore there is a command pf2struct to convert a dbpf object to a Matlab struct. There is a caveat, however: Matlab structure-field names are limited in length and are not allowed to contain unusual characters, while the underlying parameter-file implementation is much more tolerant. Therefore, if you have long names or names containing dots or hashes, pf2struct will fail and you will need to use pfget_string or other appropriate functions on the subsidiary dbpf object.
With reasonable parameter files, however, pf2struct will work fine:
In order to simplify reading complex, nested parameter files, the pfget_arr, pfget_tbl, pf2struct, pfget, and pfresolve commands (the latter are described below) allow a `recursive' option:
The pfget routine is generic, deciding for itself what datatype to return. Numbers are always returned as string values, which may be converted by the user with Matlab's str2num function:
If a parameter-file entry is specified with the &ask tag, as is the parameter named on_the_fly above, the user will be queried directly. This is based on the Matlab INPUT command, which means the answer may be given using the full-fledged Matlab interpreter:
Repeated calls are dynamically re-queried:
Next, we will look at a more complex example. Real-time operations at the Alaska Earthquake Information Center are managed in part by a parameter-file specifying real-time system setup. This is actually one of several files, helping administrators track the multiple Antelope Seismic Information Systems that are running.
Again, we will use the pffiles command to see the filenames contributing to this dbpf object:
As an interlude to help the reader understand the following demonstration of parameter file commands, here is the aeic_rtsys.pf parameter file itself:
nordic% cat /opt/antelope/4.2u/data/pf/site/aeic_rtsys.pf
site_database /iwrun/op/params/Stations/worm
archive_database /iwrun/op/db/archive/archive
site_database /iwrun/dev/params/Stations/worm
archive_database /iwrun/dev/db/archive/archive
site_database /iwrun/bak/params/Stations/worm
Again, the pfkeys command names the component parameters:
We will take three approaches to answering the question "Where is the primary acquisition system currently putting continuous waveform data?" The first mechanism of asking this from the parameter file is deliberately long-winded, for instructional purposes:
Now let's speed that up a bit:
>> nestedanswer = pf2struct(setup,'recursive');
In addition to the parameter-file reading interface described above, there is an alternative interface through the pfresolve command. This allows square-brackets in the parameter name to index list (Tbl) entries, and curly braces to index associative-array entries. We will combine these with a nested pfget inquiry to find the name of the primary system:
Note that this setup allows system maintainers to smoothly transition between operational and backup Antelope systems. By switching the primary system from Operation to Backup, operators can preserve continuous, transparent service to user processes while installing new disk drives etc.
About this time, when one gets multiple dbpf objects constructed and needs to keep track of them, it is useful to be able to identify the type of each dbpf object. This is done with the pftype command:
Top-level dbpf objects will be of type PFFILE. Subsidiary arrays are indicated with PFARR. The names under which PFFILE-type dbpf objects were launched may be obtained with the pfname command:
When one is done with a Matlab dbpf object, one can call pffree or clear() on it in order to remove the object. Note, however, that subsidiary parameter-file objects will no longer be useful once the parent is cleared, so it is important to get all the information one wants out of a parameter file object before freeing or clearing it.
Values may also be written to parameter files with the pfput series of commands, or with the dbpf command, which compiles strings into parameter files. This is explained in the documentation for the individual commands below. If a parameter file is being changed from the outside as your Matlab program runs, the pfupdate command may be used to keep up with those changes.
Get 100 seconds of data that occurred 10 minutes ago on the network of stations at Shishaldin volcano:
% Get the name of the current archive database from our local parameter file
primary = pfget(pf,'primary_system');
dbname = pfresolve(pf,['processing_systems{' primary '}{archive_database}']);
db = dblookup_table( db, 'affiliation' );
db = dbsubset(db, ['net == "' net '"']);
dbw = dblookup_table( db, 'wfdisc');
% Get data from 10 minutes ago:
tr = trload_css( db, st, et );
Channel names are not labelled. One station has three components, and another has both a vertical component and a pressure sensor, which explains the repetition of station names in this figure.
Many of these presume you have run the command dbexample_get_demodb_path, which sets the variable demodb_path to the name of a sample database. An attempt was made to make each of these examples self-sufficient. Hence there are usually a number of setup commands to make the example call possible. Some of the examples may be a bit contrived. Note that in practice, it is not necessary to keep reopening a database or a parameter-file object! The parameter-file routines use the dbloc2.pf and the rtexec.pf parameter files as examples. They should be available on any properly installed Antelope system. There are also Matlab .m files showing examples of each command in use. These example files should be in $ANTELOPE/data/matlab/antelope/examples on a properly configured system. For a list of available examples, type help antelope/examples.
The arrtimes command calculates the travel times of all known seismic phases, given the distance delta in degrees to the earthquake and the depth of the earthquake in kilometers. The default travel-time model is IASPEI '91; however, this may be changed with the TAUP_TABLE environment variable. The returned travel-time values are in seconds. In this example, we feed the result to strtdelta to produce a more readable result.
Most of the toolbox routines are pretty good about complaining when problems occur. However, if you suspect the package is caching useful error messages, this is the way to bring them to the surface.
This is probably one of the more useful commands in the toolbox. It can operate on a database table or on a view that contains only one table (for example, it will work on a view showing a subset of the origin table, but not a view that was made by joining the origin and assoc tables).
>> db = dbopen(demodb_path,'r');
>> db = dblookup_table( db, 'origin' );
>> db=dblookup(db,'','','','dbALL');
The raw storage format of the Datascope files is fixed-format ASCII rows. Usually, interaction with the database tables is smoother if you avoid handling entire rows at once. However, there are occasions where it is useful to move an entire row around. dbadd adds an entire database row to the flat-file table at once. The database pointer for each table contains something called a `scratch' record for that table. The scratch record is an entire row that is in memory for the sole purpose of scribbling. In this example we add several values to the scratch row of the origin table, then write the scratch row to the database (i.e. in this example that means we've written the fixed-format ASCII row to the end of the file /tmp/newdb.origin).
The css3.0 schema, plus several other related schemas, has a separate table for comments. This table is infrequently used. The dbadd_remark and dbget_remark functions encapsulate the operations involved in adding a row to the remark table and linking it to a row in another table, such as the origin table.
Similar to dbadd, dbaddnull puts into a database table an entire fixed-format ASCII row, with format appropriate for that table. In this case all the fields of the new row are set to their null values.
This is one of the most commonly used functions in the Datascope libraries. dbaddv adds a new fixed-format row to the specified table, setting all fields to their null values, then modifies the specified fields to contain the more interesting values given in each key-value pair. dbaddv checks that none of the primary keys match those of another row of the database; i.e., it takes some steps to keep you from corrupting your database.
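For example (the field values here are illustrative):

```matlab
>> db.record = dbaddv( db, 'lat', 61.5922, 'lon', -149.130, 'depth', 20, 'time', str2epoch('now') );
```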
This routine closes a database pointer, freeing all the associated resources (it does no harm to the underlying database files).
Removing rows from a database is usually done in two steps. The first is to set all the fields of a row to their null values, but to leave the row in its place. This first step is performed by dbmark. The second stage, accomplished by the dbcrunch command, is to actually remove the null rows from the database table. This two-step procedure prevents the record numbers of a table from shifting, which is useful if a program is still working on the table.
>> db=dbopen('/tmp/newdb','r+');
>> db=dblookup_table(db,'origin');
>> % Add four copies of the same quake, all at slightly different times:
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'))
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
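The marking and crunching steps themselves might then be sketched as:

```matlab
>> db.record = 2;    % aim at the third row (records index from zero)
>> dbmark( db );     % replace its fields with null values
>> dbcrunch( db );   % remove all null rows from the table
```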
This command immediately deletes a row from a database table.
>> db=dbopen('/tmp/newdb','r+');
>> db=dblookup_table(db,'origin');
>> % Add four copies of the same quake, all at slightly different times:
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'))
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
>> dbquery(db,'dbRECORD_COUNT')
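A sketch of the deletion itself:

```matlab
>> db.record = 0;    % aim at the first row
>> dbdelete( db );   % remove it immediately
>> dbquery( db, 'dbRECORD_COUNT' )
```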
This command is a general-purpose calculator which has access to standard math commands, useful seismological functions such as travel-time calculators, and to all the fields of a database view which is fed to the command.
In the css3.0 schema and related schemas, external files are often referenced in tables by the two fields dir and dfile. The dbextfile command combines these two fields into a full pathname, resolving relative pathnames into absolute pathnames and adjusting for the actual location of the database table. The dbextfile command requires the name of the base table from which the dir and dfile fields should come. (Note that in many cases the simpler dbfilename command will suffice.)
>> db = dbopen( demodb_path,'r' );
>> db = dblookup_table( db, 'wfdisc' );
>> dbt = dblookup_table( db, 'sensor' );
>> dbt = dblookup_table( db,'instrument' );
>> dbextfile( db, 'instrument' )
/usr/local/matlab/toolbox/antelope/examples/demodb/response/sts2_vel_RT72A.1
/usr/local/matlab/toolbox/antelope/examples/demodb/wf/knetc/1992/138/210426/19921382155.15.CHM.BHZ
In the css3.0 schema and related schemas, external files are often referenced in tables by the two fields dir and dfile. The dbfilename command combines these two fields into a full pathname, resolving relative pathnames into absolute pathnames and adjusting for the actual location of the database table.
>> db = dbopen(demodb_path,'r');
>> db = dblookup_table(db,'instrument');
/opt/antelope/4.2u/data/matlab/antelope/examples/demodb/response/sts2_vel_RT72A.1
Note that if more than one table with external file references is present in the input view, only the first one will be chosen and returned. This may not always be the intended filename. For cases where the dir and dfile fields appear multiple times in the input view, use dbextfile instead of dbfilename.
This command is a general-purpose utility to hunt through a database table or view for a record matching a specific criterion. Useful features include the ability to skip the first few matches, or to search backwards through the view.
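For example, using the expression style shown earlier in this document:

```matlab
>> db.record = dbfind( db, 'sta == "CHM" && chan == "BHZ"' );
```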
This command frees up the resources allocated when a new view is created. The input database pointer must identify a single table, that is db.table and db.database should be valid. Generally, it is only necessary to explicitly free database views when they are very large or many of them are made within the same program.
As explained for the dbadd command, the underlying storage of database tables is as fixed-format ASCII rows. The dbget command can be used to retrieve an entire database row as a string (in fact, it is much more general, allowing the retrieval of entire tables or just specific fields depending on the value of the database pointer). Rather than trying to parse the output of dbget, use dbgetv to find specific pieces of information in a table or database row.
As explained under dbadd_remark, dbget_remark eases the retrieval of comments in databases with the css3.0 remark table.
The dbgetv command is one of the most frequently used commands in the Antelope programming environment. With dbgetv one can get specific fields out of a database row. A unique characteristic of the Matlab-interface dbgetv command is the ability to extract entire columns at once out of a database table.
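A sketch of extracting whole columns from the demo database's origin table:

```matlab
>> db = dbopen( demodb_path, 'r' );
>> db = dblookup_table( db, 'origin' );
>> [lat, lon] = dbgetv( db, 'lat', 'lon' );   % one value per row of the table
```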
The database-pointer is actually a structure of four integers. There is an `invalid' value for all of these which is occasionally useful for tests or as the input to some commands.
dbjoin allows the user to construct composite views in a relational database. Information in each table is cross-referenced according to its primary fields to construct a set of the corresponding, joined rows.
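A sketch, assuming dbjoin takes the two database pointers directly:

```matlab
>> db  = dblookup_table( db, 'origin' );
>> dbt = dblookup_table( db, 'assoc' );
>> db  = dbjoin( db, dbt );    % join origin to assoc on the inferred keys
```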
The standard Datascope join operations between database tables are accomplished by inferring the sensible join keys with which to combine the two tables. dbjoin_keys explains which fields were used or will be used to perform a join.
The four-element dbpointer structure, used as a handle to reference different fields or sections of a relational database, is rarely modified by hand. dblookup allows the four elements of the dbpointer structure to be aimed based on human-readable names for the tables and fields. Additionally, several recognized constants such as `dbALL' and `dbSCRATCH' allow further control of the parts of the database to which dblookup aims the database pointer.
One of the most common operations with dblookup is to aim the database pointer at a particular table. dblookup_table is an easier-to-type shorthand for this operation.
This command is the first stage of a two-part process to remove a row from a database table, as explained under dbcrunch. For the impatient, see dbdelete.
>> db=dbopen('/tmp/newdb','r+');
>> db=dblookup_table(db,'origin');
>> % Add four copies of the same quake, all at slightly different times:
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'))
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
>> db.record=dbaddv(db,'lat',61.5922,'lon',-149.130,'depth',20,'time',str2epoch('now'));
In several of the css3.0-style database tables, entries such as hypocentral solutions ("origin" table) or seismic phase arrivals ("arrival" table) are identified by unique integer ids. The dbnextid command retrieves the next unused value for any of these integer indices.
Similar to dbjoin, the dbnojoin command returns a view showing rows in the first table that have no counterpart in the second.
The first step in using Datascope on a relational database is to create a `handle', called a database pointer, to the ASCII flat files which store the database contents. This step is performed by dbopen. Here we have written a small routine to reliably provide the pathname of a sample database for these examples.
Many programs require some form of parameter file to store information about run-time configuration. The Antelope parameter-file utility provides a very powerful mechanism to handle such input files, including boolean, string, and numeric values as well as tables or key-value arrays, all of which can be nested. In the Antelope Toolbox for Matlab, interaction with a parameter file is through a `handle' called a dbpf object. See the Antelope documentation for more details on the parameter file mechanism.
>> % Now as a contrived example of the other methods of use,
>> % convert it to a string, then compile it into a new parameter-file object:
>> string_version = pf2string( pf );
>> % Create an empty parameter-file object:
>> % Compile the new string into the empty parameter-file object:
>> % (you can compile into parameter-file objects that aren't empty as well)
dbprocess provides a simplified interface for forming various views. When a sequence of standard database operations (such as subsets, joins, and sorts) must be performed in a row, they can be combined into a single block, passed as a list of statements to dbprocess.
>> db = dbopen( demodb_path,'r' );
>> db = dbprocess( db, { 'dbopen arrival';
Detailed explanations of the valid statements available in dbprocess may be found in the unix man-pages for the dbprocess command. For reference, a summary list is provided here:
dbjoin [-o] table [ key key ..]
dbleftjoin [-o] table [ key key ..]
This function is similar to dbput; however, it does not automatically add its own null row, nor does it do any consistency checking to make sure the new row makes sense given the contents of the rest of the table. Again, avoid working with entire rows at once unless necessary; consider using dbputv if possible.
The dbputv command is used to put individual field values into a database row. This is an extremely important command in the Datascope library. Here, we make a new row with the dbaddnull command so we have somewhere to put our values.
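A sketch, assuming dbaddnull returns the record number of the new row:

```matlab
>> db.record = dbaddnull( db );    % append a null row to write into
>> dbputv( db, 'lat', 61.5922, 'lon', -149.130 );
```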
The dbquery command is used to request a wide variety of information about a database or one of its component parts. One of the most common uses is to count the number of records in a table.
The css3.0 schemas and related schemas reference instrument response information in separate files. These response files allow poles-and-zeros format, frequency-amplitude-phase triplet format, FIR format, and more. The dbresponse object is a handle to one of these response files, from which response information can be extracted.
This command takes any database view and returns a view sorted according to the specified expression.
This command, fairly self-explanatory, returns a database view containing only those rows from the input view which match the specified expression.
In the dbjoin command, described above, the comparison fields ("join keys") used to decide which rows correspond were inferred. The dbtheta command allows you to perform the join with full control over whether two rows should be associated, based on the supplied test expression.
Once a view is created, many database operations can be performed on it which winnow out certain rows of each component table. The resulting view may be split into the component rows from each participating table and written to a new database. This is accomplished with the dbunjoin command.
Most time handling in Antelope (not to mention Unix) is done in terms of Unix epoch seconds, or seconds since 1970. The epoch2str command provides a highly flexible method for creating more human-readable time strings from an epoch time.
This function allows a dbresponse object (ultimately, a file of instrument response information stored as poles and zeroes or frequency-amplitude-phase triplets etc.) to be queried for the complex response at certain frequency values.
Once the user is done with a dbresponse object, it must be freed with the free_response command.
This command allows the user to set the beginning time for reading from an Antelope real-time ORB buffer. Note that all the orb examples below require a running orb, for which you have permission to connect.
>> % This presumes that you have connect permission to a running
>> % orb called 'nordic' (you probably don't...)
>> fd = orbopen( 'nordic', 'r' );
>> [result,time, srcname, pktid] = orbget( fd );
>> % Get the next packet with timestamp after the packet we just got:
>> % (note that there's no a-priori requirement that packets arrive on the
This allows the user to close down an open connection to an Antelope ORB.
The orbget command collects the specified packet from an Antelope ORB, unpacks it based on its type, and returns it to the user. Currently the understood types are waveform, parameter-file (you can put an entire parameter file on an ORB), and database-row. Other types of packets are returned as byte vectors. Each packet on an orb has a timestamp and a source-name, which are also returned.
>> % This presumes that you have connect permission to a running
>> % orb called 'nordic' (you probably don't...)
>> % First we'll get a waveform-data object from an orb:
>> fd = orbopen( 'nordic', 'r' );
>> orbreject( fd, '/db/.*|/pf/.*' );
>> [result, time, srcname, pktid, type] = orbget( fd )
The orbopen command allows you to establish a read or write connection to a running Antelope ORB server anywhere on the Internet (provided the orbserver maintainers have given you permission to connect to that orb). You may have multiple simultaneous connections to the same ORB.
This command is primarily useful to verify that an ORB connection is up and running. It has the side benefit of telling you the version number of the orbserver.
This is one of the most common orb commands. Evaluated in a tight loop, it allows you to successively receive packet after packet for the streams you've chosen. Each packet can then be processed as necessary.
This command allows the user to reject certain packets from ever coming across a particular orb connection. The specification is by means of regular-expression matching on the source-names of the packets.
>> % This presumes that you have connect permission to a running
>> % orb called 'nordic' (you probably don't...)
>> fd = orbopen( 'nordic', 'r' );
>> % Reject all parameter-file packets, all database-row packets,
>> % and all waveform packets for the Alaska net whose station-names
>> % (return the number of sources still available on the connection)
For a given read connection to an orbserver, the orbseek command allows the user to position the reading point in the stream to a certain packet number or to a specified relative location in the stream (newest packet, oldest packet, next packet, etc.).
Orbselect is a very useful command which allows the user to filter packets from an orb connection so that only those matching certain source-name criteria get through.
This command returns the packet-identification number for the current packet on a read connection to an orbserver.
An entire parameter file (being essentially an ASCII file) or subsection thereof may be converted to a string.
The Matlab Antelope Toolbox allows an entire parameter file to be loaded into a Matlab struct, which is very close to the same idea: a flexible set of key-value pairs. The `recursive' option to pf2struct can be used to read a complex parameter file in all at once into an easy-to-use Matlab-style object.
arrival_info: 'arid sta time iphase deltim fm amp per auth'
azimuth_font: '-Adobe-Helvetica-Bold-O-Normal--*-120-*'
dbpick_channel_options: [1x1 struct]
dbpick_options_order: 'Vertical Horizontal All Selected'
dbpick_revert_to_default: 'yes'
fixedwidth_font: '-Adobe-Courier-Bold-R-Normal--*-120-*'
max_event_time_difference: '25'
ok_residual_color: 'DodgerBlue'
plain_font: '-Adobe-Helvetica-Bold-R-Normal--*-120-*'
site_info: 'staname {lat . "," . lon} gregion(lat,lon)'
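A listing like the one above might be produced with commands along the following lines (this assumes a dbpick.pf parameter file can be found on the PFPATH):
>> pf = dbpf( 'dbpick' );
>> s = pf2struct( pf, 'recursive' )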
The Antelope parameter-file mechanism allows the parameters to be extracted from any of the matching parameter files along an entire search path (specified in the PFPATH environment variable). The pffiles command shows which pathnames actually contributed to a given parameter-file object's contents. The `all' option shows all the files that were tested for existence and possible contribution.
The pfget command retrieves the specified parameter from the dbpf object into an appropriate format.
This retrieves an associative array from a parameter-file object.
This retrieves a boolean value from a parameter-file object, translating strings such as `yes' or `false' into numeric values.
This command retrieves a table of values from a parameter file.
This command extracts the key names for the key-value pairs in a parameter file.
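As a sketch of these retrieval commands, using parameter names taken from the dbpick.pf listing shown earlier (and again assuming dbpick.pf is on the PFPATH):
>> pf = dbpf( 'dbpick' );
>> order = pfget( pf, 'dbpick_options_order' )
>> revert = pfget_boolean( pf, 'dbpick_revert_to_default' )
>> channel_options = pfget_arr( pf, 'dbpick_channel_options' )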
The pfput command is a very general routine to put strings, numbers, cell-arrays (as tables), or structures (as associative arrays) into a parameter-file object.
This is an alternative interface to the parameter-file objects with a naming convention that reflects any nesting in the parameter-file components (tables and hashes). For further detail see the Datascope man pages.
Only top-level parameter-file objects are of type PFFILE. Subsidiary key-value structures (arrays) inside a parameter file will have dbpf objects of type PFARR.
This command allows your program to stay current with a parameter file if outside forces are changing it while your program is running.
>> % Create a parameter file and put one value in it
>> unix( 'echo myint 13 > /tmp/myfile.pf' );
>> % Open the parameter file and extract the parameter:
>> pf = dbpf( '/tmp/myfile.pf' );
>> myint = pfget( pf, 'myint' )
>> % Now change the parameter file from outside the Matlab context:
>> unix( 'echo myint 25 > /tmp/myfile.pf' );
>> % A retrieval of the parameter returns the previously cached value:
>> myint = pfget( pf, 'myint' )
>> % Updating the parameter-file object refreshes the cached values:
>> [pf, modified] = pfupdate( pf )
This is a very powerful and flexible parsing utility for turning a human-readable time string into an epoch time.
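For example:
>> e = str2epoch( 'Jan 1, 2000' );
Here e is the epoch time 946684800, corresponding to 1/1/2000 00:00:00.000 UTC.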
This utility turns an input number of seconds into a reasonably-formatted string describing the time interval.
This function is the same as the strtime function, but also includes the day number of the year.
This is a safe macro to reliably construct an endtime from a starting time, sample rate, and number of samples. All other useful permutations of this routine exist as well (below).
This function applies the calibration constant to waveform data contained in the trace object.
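A sketch of typical use, assuming db points to a wfdisc row and that time and endtime have been retrieved with dbgetv as in the examples below:
>> tr = trload_css( db, time, endtime );
>> trapply_calib( tr );
>> data = trextract_data( tr );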
This routine frees all resources associated with a trace-object.
A trace-object is just a database pointer, pointing to an open database in the Trace4.0 schema. The "trace" table of this database has a field called "data", which contains the address of some waveform data in memory. The trextract_data command takes this address and loads the data it points to into a Matlab vector.
>> db = dbopen(demodb_path,'r');
>> db=dblookup_table(db,'wfdisc');
>> [time,endtime,nsamp,samprate]=dbgetv(db,'time','endtime','nsamp','samprate');
>> tr = trload_css(db,time,endtime);
>> data = trextract_data( tr );
>> whos data
  Name      Size         Bytes  Class
  data      2150x1       17200  double array
Trfree is a way to free resources for part of a trace-object structure. Consider using trdestroy unless you know what you're doing.
This command is one of several methods to extract waveform data from a database.
>> db = dbopen(demodb_path,'r');
>> db=dblookup_table(db,'wfdisc');
>> [data,nsamp,t0,t1]=trgetwf(db);
>> whos data
  Name      Size         Bytes  Class
  data      2527x1       20216  double array
This is the converse of the trextract_data command, described above.
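As a sketch, to replace the samples of a trace row with a modified copy (this assumes tr is a trace-object returned by trload_css):
>> data = trextract_data( tr );
>> trinsert_data( tr, 2 * data );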
This command loads the specified time range of data from a database pointer into a trace object (which, of course, is also a database pointer, but in a special schema designed for the handling of waveform data in memory).
This is a deprecated interface for putting waveform data into a database. Please switch to trsave_wf.
>> db=dbopen('/tmp/newdb','r+');
>> db = dblookup_table( db, 'wfdisc' );
>> % Construct a fake waveform (the values below are illustrative):
>> nsamp = 1000; samprate = 20;
>> data = sin( 2 * pi * [0:nsamp-1]' / 100 );
>> data = data * 32 * pi / 1000;
>> % Construct some variables describing the waveform:
>> sta = 'SINE'; chan = 'BHZ';
>> dir = '.'; dfile = 'sine.w'; foff = 0; datatype = 's4';
>> time = str2epoch('5/12/97 13:57:18.143');
>> endtime = tr_endtime( time, samprate, nsamp );
>> % Enter the description of the waveform data into the wfdisc table:
>> db.record = dbaddv(db,'sta',sta,'chan',chan, 'nsamp', nsamp, ...
        'samprate', samprate, 'time', time, 'endtime',endtime, ...
        'foff',foff, 'datatype',datatype, 'dir',dir,'dfile', dfile);
>> % Now put the actual data samples into the file, in the specified format:
>> trputwf( db, data );
>> % As a test, get the data back out:
>> [newdata, nsamp, t0, t1] = trgetwf( db, time-1, endtime+1 );
This command is the generic interface for putting waveform data from a trace object into a database. For details see the Datascope man page on trsave_wf.
>> % Create an empty trace-object:
>> tr = trnew;
>> tr=dblookup_table(tr,'trace');
>> % Construct a fake waveform (the values below are illustrative):
>> nsamp = 1000; samprate = 20;
>> data = sin( 2 * pi * [0:nsamp-1]' / 100 );
>> time = str2epoch('5/12/97 13:57:18.143');
>> endtime=tr_endtime(time,samprate,nsamp);
>> % Put the waveform into the trace-object:
>> tr.record=dbaddv( tr, 'net', 'AK', 'sta', 'SINE', 'chan', 'BHZ', 'nsamp', nsamp, ...
        'samprate', samprate, 'time', time, 'endtime', endtime );
>> trinsert_data( tr, data );
>> % Save the trace data in a new database, with the underlying file in miniseed format:
>> db=dbopen('/tmp/newdb','r+');
>> trsave_wf( tr, db, 'sd' );
This routine attempts to splice together as many data segments as possible that are contained in the input trace object.
>> db = dbopen(demodb_path,'r');
>> db=dblookup_table(db,'wfdisc');
>> db=dbsubset(db,'sta == "CHM" && chan == "BHZ"');
>> [time,endtime,samprate,nsamp]=dbgetv(db,'time','endtime','samprate','nsamp')
>> tr=trload_css(db,time,time+10);
>> tr=trload_css(db,time+10,time+20,tr);
>> dbquery(tr,'dbRECORD_COUNT')
>> strtime(dbgetv(tr,'endtime'))
>> % Splice the two adjoining segments into one (the tolerance value is illustrative):
>> trsplice( tr, 0.5 );
>> dbquery(tr,'dbRECORD_COUNT')
The Matlab Antelope toolbox differs in several aspects from the Antelope language interfaces in C, Tcl, Fortran, and Perl. First, the natural mode of operation in Matlab is to work on entire arrays at once. Therefore, where possible, Antelope database commands have been expanded to read in entire matrices of results when appropriate (e.g. dbgetv), or to act on entire matrices at once (e.g. epoch2str). Similarly, where naming conventions permit, parameter-files may be loaded wholesale into Matlab structures with pf2struct, and database tables may be loaded into structures with db2struct.
Special options to Antelope commands are usually specified with string input arguments, such as `backwards' for dbfind. In many cases the order of placement of these options is important--see the help pages on each command for details.
The most general interface to Antelope, the C language interface, allows temporary views to be given user-specified names. This feature is not supported in the current release of the Matlab toolbox.
This toolbox was developed on Sun Solaris 2.6 with Matlab version 5.3. It has not been tested on other platforms or with other versions.
Several aspects of this current beta release must be treated with caution. First, the database-pointer DBPTR and trace-pointer TRPTR objects refer to databases and memory opened by the underlying Antelope libraries. Freeing these objects with the Matlab clear command does not properly close the underlying databases, nor free the corresponding memory. These objects must be removed from the Matlab workspace with the DBCLOSE and TRDESTROY commands provided. Note that the DBPTR and TRPTR structures are not objects in the Matlab sense--the word is being used loosely here, paralleling the Datascope documentation (for the trace objects). Conceptually they are very similar to objects, though the user can see and manipulate the private variables directly (useful, for example, to loop over the DBPTR.record field), and there is no Matlab class tag. These items have been kept as Matlab structures rather than objects to preserve similarity between the Matlab Antelope toolbox and the other programming interfaces for Antelope.
The parameter-file objects actually are Matlab objects. Again, though, they must be cleared carefully. The clear function is overloaded for the DBPF class of objects; however, at least in Matlab 5.2, the command/function duality is broken by the CLEAR command, and apparently the generic CLEAR command is not smart enough to call the overloaded methods for DBPF objects. One must specifically call the CLEAR function (i.e. use parentheses around the argument), or the equivalent PFFREE function, on the DBPF object. Also, the pf routines may at times return derivative DBPF objects, representing complex entries in the parent parameter-file (DBPF object). First of all, these must not be cleared with the PFFREE command. Second, they lose meaning but unfortunately stay resident when the parent DBPF object (the one returned when the whole parameter file was read in) is cleared. Acting on them after destroying the originating DBPF may produce unpredictable results.
There are a lot more trace-library commands available, many of which have not yet been implemented.
The response-file objects are also actually Matlab objects. Just as with the DBPF objects, these need to be explicitly cleared with the overloaded clear functions, CLEAR(DBRESPONSE) or equivalently FREE_RESPONSE.
While the doc command works for Antelope Toolbox commands, the Matlab helpdesk search engine does not yet recognize them. To search for Antelope Toolbox commands, use the Matlab lookfor command or the search window of the Matlab helpwin help window.
Unlike the other Antelope language interfaces, the Matlab dbeval is able to return entire columns of values if the input database pointer refers to more than one row. As a standard feature, dbeval can return values that are aggregate expressions over the whole table, such as max(). If more than one row is passed to such an aggregate expression in Matlab, the aggregate expression will be recalculated for each row; this redundancy can cause huge performance drops on a large database. Therefore, unless necessary, the user should avoid passing multiple rows to dbeval when using aggregate expressions. See the Unix dbex_eval(3) man pages for the list of aggregate functions in the Antelope expression calculator.
The Matlab interface to the orb routines is fairly new, and though tested, the Matlab toolbox routines have not been extensively used in implementation. There is the possibility of some change if initial experience shows any inconveniences.
The Antelope Software system, including the Datascope relational-database management
system, is a product of Boulder Real-Time Technologies, Inc., http://brtt.com/
The Antelope Toolbox for Matlab was written by
University of Alaska, Fairbanks
This development has been unfunded and therefore conducted almost entirely in my spare time, which means that I cannot offer official support for it. However, I am nevertheless very interested in feedback and bug reports.
This project would of course have gone nowhere without the underlying Antelope and Datascope software package provided by Boulder Real Time Technologies, Inc. The author would like to thank Danny Harvey for initial encouragement, and Dan Quinlan for extensive and valuable technical consultation and support in this work. Frank Vernon and his research group provided not only complete encouragement and positive feedback early on, but also significant help in beta-testing and debugging a production release. Beta testing was kindly provided by the research group of Gary Pavlis (including Scott Neal and Christian Poppeliers) at the University of Indiana, and by Geoff Abers at the University of Kansas. Local tolerance of experimental versions has been patiently extended to the author by University of Alaska Matlab users, notably Guy Tytgat.