Frequently asked questions

Answers to general and commonly asked questions are below. Use the search field in the upper right to find results from all of our support documentation.

General questions

What kinds of data are in Movebank?

Movebank's database supports the storage of data describing locations of individual animals at known times, and other animal-borne sensor data, along with information about related animals, tags, and deployments.

Read more

How is Movebank funded?

Movebank has long-term funding through the Max Planck Society and the University of Konstanz. Read more about past and current funding sources.

Are my data safe in Movebank?

Movebank is successfully used by government agencies, conservation organizations and universities around the world to securely store and share data.

Sharing and terms of use

Movebank's permissions options allow data owners to share data publicly or restrict access to other registered users. These permissions settings also apply to our support and curation team. Anyone with permission to download data must agree to Movebank's user agreement and general terms of use, and data owners can further require that collaborators first accept their own custom terms of use. Because these options mean that we cannot guarantee long-term accessibility of non-public data, we offer guidance for data owners to ensure that their data remain available in the future.

Internet security protocols

Data in Movebank are stored on secure servers in Germany (and not "in the cloud"). All sending of data and communications on the site are encrypted using an SSL/TLS protocol (as indicated by the "https" at the beginning of site URLs).

Who controls the data in Movebank?

Data owners control access to, and are responsible for the quality of, data in Movebank. Movebank is a tool to help researchers archive, manage, and share their data by setting permissions and terms of use customized to their project goals.

Can I download all the data in Movebank?

Data owners who use Movebank can choose when and with whom to share data at the level of each project or study. For studies that do not allow public download, users can arrange data sharing with the owner. For large-scale projects, we recommend reading our tips for collaborations and REST API documentation.

Can I use Movebank to fulfill data-sharing requirements?

Many research funding agencies and academic journals now have rigorous data-sharing policies requiring scientists to make their data available to other researchers. For example, the U.S. National Science Foundation now requires that research funding proposals include a description of how research data will be disseminated, shared, and preserved. A major goal of Movebank is to provide an efficient way for scientists to comply with these types of policies, and we have developed the Movebank Data Repository to formally publish and archive datasets (providing a DOI for the dataset associated with a published article). Feel free to contact us at support@movebank.org for help in preparing a data management plan or fulfilling specific requirements.

Is there an offline version of Movebank?

We do not offer a downloadable copy of Movebank. The goals of the Movebank project are to promote data sharing, archiving and collaboration by helping researchers to translate their datasets to a common format and make them available to other researchers and the public. Thus supporting an offline version of the database is outside the scope of the project, and is also unlikely to meet the full data management needs of individual research groups.

However, we do understand that you can't always be online, and that you will want to run analyses on data outside of Movebank. We offer several features to support offline use of data that are in Movebank:

  • You can always download a copy of your data from Movebank as a .csv, Excel, or Google Earth file or as an ESRI shapefile. This allows you to access an offline version of your data in almost any desktop program.
  • If new attributes are added to your dataset as a result of offline analysis (for example migration stage or habitat), you can upload these new attributes to your existing dataset in Movebank.
  • There are a growing number of data analysis programs that work with Movebank-format data but that can be used without an internet connection. For example, the R package move and the Acceleration Viewer can access your data directly from Movebank if you have an internet connection, but work offline as well if you have the data stored.

To develop your own customized database that can be made compatible with the Movebank format, we recommend the following guide:

Urbano F and Cagnacci F (2014) Spatial database for GPS wildlife tracking data: A practical guide to creating a data management system with PostgreSQL/PostGIS and R. Springer, 257 p.

How can I get search results as a table?

You can search and browse studies in Movebank on the Tracking Map and Studies page. This allows you to identify and learn more about individual studies, but there is currently no option to export search results. However, a table of results can be useful for identifying large numbers of studies that meet certain criteria. You can get a table of study information using the REST API by using the request to "get a list of studies". Results will be based on the permissions of the account used to view study summaries.
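As a sketch, the request URL can be composed in Python. The endpoint path and the entity_type parameter follow Movebank's public REST API documentation; confirm against the current docs before relying on them.

```python
from urllib.parse import urlencode

# Endpoint and parameter name follow Movebank's public REST API
# documentation; confirm against the current docs before use.
BASE_URL = "https://www.movebank.org/movebank/service/direct-read"

def study_list_url():
    """Build the request URL for the "get a list of studies" call."""
    return BASE_URL + "?" + urlencode({"entity_type": "study"})

# Fetch the URL with curl or urllib.request, authenticating with your
# Movebank username and password; the response is a table of studies
# visible to the authenticated account.
```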

What data are stored

What sensor data can Movebank handle?

See "Sensor Type" in the Movebank Attribute Dictionary for a current list of sensor types in the database. Some sensors, such as temperature and wet/dry sensors, are also supported and have attributes in the dictionary, but do not have their own sensor type value because they are typically reported and stored with other supported sensor types (i.e. in the same data rows).

Does Movebank stream live data?

You can set up live data feeds to currently deployed Argos PTTs and GSM or Iridium GPS tags from many data providers. Once a feed is started, your data will be automatically added to your study in Movebank several times a day, and you can register to receive regular email notifications with updates.

How does Movebank handle deployment periods and redeployments?

Movebank is designed to manage deployment periods and redeployments. Deployment information is managed separately from location and sensor measurements, so that you can add new deployments or update deployment start and end times without needing to change individual event records. As long as data are associated with the correct Tag ID and deployment information is provided, changes to deployment dates will automatically update which event records are associated with each animal.
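The logic can be illustrated with a minimal sketch; the data structures and names below are hypothetical, not Movebank's internal implementation.

```python
from datetime import datetime

# Illustrative data model, not Movebank's internal implementation:
# deployments link a tag to an animal for a time period, while events
# carry only a tag ID and timestamp.
deployments = [
    # (animal_id, tag_id, deploy_on, deploy_off)
    ("stork_1", "tag_42", datetime(2023, 1, 1), datetime(2023, 6, 1)),
    ("stork_2", "tag_42", datetime(2023, 7, 1), datetime(2024, 1, 1)),
]

def animal_for_event(tag_id, timestamp):
    """Return the animal an event belongs to; events outside every
    deployment period for the tag remain unassigned (None)."""
    for animal, tag, on, off in deployments:
        if tag == tag_id and on <= timestamp < off:
            return animal
    return None
```

In this sketch, updating a deployment end time amounts to editing one tuple, and the event-to-animal assignment follows automatically, which mirrors how changing deployment dates in Movebank updates which events are associated with each animal.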

Read more

Does Movebank support radio and acoustic telemetry data?

Movebank is designed for relocations of individual animals, including those collected by systems of radio and acoustic telemetry arrays. However, our data model and attributes do not place importance on the receiver, and we do not include a system for communication between owners of receivers and transmitters, which is critical to collecting telemetry data for long-distance migrants. Therefore we recommend using Movebank to store or publish telemetry data that have already been processed and resemble a traditional tracking dataset. Several other programs specialize in acoustic or radio telemetry data.

We partner with some of these groups and are working on improving support for processed telemetry data in Movebank.

Can I create my own variables in the database?

All data are imported to terms described in the Movebank Attribute Dictionary. While it's not possible to create your own variables on the fly, it is typically possible to import everything you need.

Data import & troubleshooting

What can I do if I am getting error messages or my changes won't save?

Error messages can be caused by several factors, including file formatting issues, internet connection problems, dataset size or our server. In some cases, problems are caused by information your web browser has stored in its cache, and you can solve them by bypassing or clearing your cache. (Similar problems can occur on other websites that allow users to make significant changes to site content, such as Wikipedia, where detailed instructions for solving cache problems are available.) Contact us about error messages by sending a description of how to recreate the problem and the text of the error message to support@movebank.org.

Why don't my tracks appear on the Tracking Data Map?

If you are logged in and have permission to view tracks but don't see them on the Tracking Data Map, it could be that the event records are linked to Tag IDs but not Animal IDs. Go to Download > Download reference data from the Studies page to see the current deployment information in the study. You can add or change deployment information and other reference data in the Deployment Manager (easiest for making a few changes) or by uploading a reference data file (easiest for making many changes).

Read more

What is the effect of Argos' Kalman filter-based algorithm on the Douglas Argos Filter?

Argos offers two algorithms for calculating position estimates using the Doppler effect: one is based on least squares analysis and the other, introduced in 2011, is based on Kalman filtering. Whereas data processing using the least squares analysis leads to two possible locations at each time point, locations derived from Kalman filtering have only one location solution. However, the DIAG format data provided by Argos are in the same format regardless of processing method. For data processed with the Kalman filter, the lat1/lon1 and lat2/lon2 attributes are provided, but the two locations are usually identical. Exceptions occur where the Kalman method fails, in which case the data are processed using the least-squares method, and two alternate locations are provided.

The Douglas Argos Filter implemented in Movebank can process Argos data derived from either filtering method. Because data derived from the Kalman filter-based algorithm are commonly of better quality, the Douglas filter will often remove fewer points.

Read more

Why do I get the message "the data does not contain the necessary Argos attributes"?

You will receive this message when running Argos data filters if your data do not contain the required Argos attributes. Here are details and guidance for what to do if you are missing the original values:

  • Primary and alternate Argos locations (Argos lat1, Argos lon1, Argos lat2 and Argos lon2): Movebank will find records where Argos' alternate location (2) is actually the more likely one (~3% of the time) and in those cases use location 2 as the better estimate. Workaround: If you don't have both sets of locations, in Argos Location, choose "Import raw, alternate Argos locations" and then map the latitude column to both lat1 and lat2 and the longitude column to both lon1 and lon2.

  • Argos IQ: used by the Best of Day filter. Workaround: Dave Douglas recommends assigning "11" to every record. You can add this during import by selecting Map column > Argos IQ, checking the box next to Set fixed Argos IQ for all rows, and entering "11".

  • Argos nb mes: used by the Best of Day filter. Workaround: Again, assign a dummy value to every record as described above. Values below 2 will cause locations to be flagged as outliers, so "2" is a reasonable choice that is also clearly not a real value.

  • Argos LC: used by both Argos filters. The LC attribute is essential to these filters, and without it there is little point in using them. Consider using the general-purpose filters instead.

Consider how missing values could affect the results. If you want to use the Best of Day filter, but don't have the original IQ values, use rankmeth = 3, which ignores IQ. If you don't have nb mes, you might want to avoid the Best of Day filter altogether.
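The workarounds above can be applied to a file before import. Here is a minimal Python sketch; the column names are illustrative and must be mapped to the corresponding Movebank attributes during import.

```python
import csv
import io

def fill_argos_defaults(text):
    """Apply the workarounds above to .csv content before import:
    duplicate the single location into the alternate-location columns
    and assign dummy IQ and nb mes values. Column names are
    illustrative; map them to the matching Movebank attributes."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for r in rows:
        r["lat2"] = r.get("lat2") or r["lat1"]  # copy location 1 to location 2
        r["lon2"] = r.get("lon2") or r["lon1"]
        r["iq"] = r.get("iq") or "11"           # dummy IQ, per the tip above
        r["nb_mes"] = r.get("nb_mes") or "2"    # >= 2 avoids outlier flagging
    return rows
```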

Why do my timestamps say "00:00.0"?

If you download a .csv file from Movebank and open it in Excel, you'll see the "timestamp" column header, but Excel displays each value as "00:00.0". To get Excel to display the format correctly,

  • Select the column.
  • Select Format > Cells.
  • Select Custom from the Category list.
  • Enter "yyyy-mm-dd hh:mm:ss.000" into the Type field.
  • Click Ok.

If you save the file in Excel before correcting the display, the timestamps will be corrupted. To avoid this problem, you can download data in Excel format if you intend to use Excel to view or manage your data.

Why is my taxon invalid?

Movebank uses the Integrated Taxonomic Information System (ITIS). Sometimes ITIS doesn't include subspecies or other names that are adopted by other taxonomies. We recommend using the lowest level taxonomic rank available in ITIS for the species of your animal, and you can add the full taxon name using the attribute taxon detail. If you find a taxon at ITIS but not at Movebank, please email support@movebank.org.

How do I import very large csv files?

It is possible to import .csv files of at least 2 GB to Movebank. The import interface and upload might proceed slowly and sometimes encounter errors, for example because of a browser timeout when reading the file or format inconsistencies in the file that cause the import to fail. We continue to pursue performance improvements, but remain constrained by the capabilities of internet browsers and connection speeds. Here are a few tips:

  • Review the file in R or other software ahead of time to look for any inconsistencies or format issues that can cause errors. With large files the error message may not contain useful information to help identify the format problem.
  • To map data columns, create a saved file format using a small subset of the data, such as the top 100 rows, and then save the format in your study. You can delete this file if you want, and retain the format to use with larger files. This will eliminate the need to map each column of the large file and you will be able to immediately finish the import once it loads.
  • If you receive an error when you attempt to upload or import a large file, wait for an hour or two and return to your study to see if the file has saved or imported. The browser may have timed out but the upload/import often completes successfully.
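For the first tip, a minimal pre-check might look like this in Python; the column name and timestamp format are assumptions, so adjust them to your file.

```python
import csv
import io
from datetime import datetime

def check_csv_text(text, timestamp_col="timestamp",
                   fmt="%Y-%m-%d %H:%M:%S.%f"):
    """Scan .csv content for the inconsistencies that most often break
    large imports: ragged rows and unparseable timestamps. The column
    name and timestamp format are assumptions; adjust to your file."""
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    ts_idx = header.index(timestamp_col) if timestamp_col in header else None
    problems = []
    for line_no, row in enumerate(reader, start=2):
        if len(row) != len(header):
            problems.append((line_no, "wrong number of columns"))
        elif ts_idx is not None:
            try:
                datetime.strptime(row[ts_idx], fmt)
            except ValueError:
                problems.append((line_no, "bad timestamp"))
    return problems
```

For a file on disk, pass `open(path).read()` (or adapt the function to stream the file) and review the reported line numbers before importing.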

How should I import large numbers of files to a study?

If your data are stored in many files in the same format, consider merging them prior to import:

  • See this example (see section 9.2.6) for merging files in R.
  • To merge files of text strings (for example many Argos DIAG files) in Terminal: type "cat ", drag/drop all the files onto the Terminal window, add the file path and name, e.g. " > /Users/me/Desktop/filename.txt", to the end, and hit enter.
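For .csv files with header rows, a minimal Python sketch can do the merge while keeping only one header; it operates on file contents as strings (in practice you would read the files, e.g. with glob and open).

```python
def merge_csv_texts(texts):
    """Concatenate the contents of same-format .csv files (passed here
    as strings; in practice read them with open()), keeping the header
    line from the first file only."""
    merged = []
    for n, text in enumerate(texts):
        if text and not text.endswith("\n"):
            text += "\n"  # guard against a missing final newline
        lines = text.splitlines(keepends=True)
        merged.extend(lines if n == 0 else lines[1:])  # skip repeated headers
    return "".join(merged)
```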

If you import many files, here are some tips:

  • Use supported or saved file formats as much as possible to reduce the need to map each file. Be extremely careful with mapping timestamp formats, as they tend to vary across large numbers of files if they have been modified from the originals.
  • Name the files consistently to keep track of what time periods or study sites are included in each file. In particular for e-obs logger.bin files, we recommend creating a unique name for each file prior to import, for example "logger_20191221.bin", to identify the date that data were accessed from the base station (including a location or base station identifier if needed). With proper naming, your files can be sorted in chronological or another useful order in the Studies Page and you can more easily ensure all data have been imported and identify and fix any import errors.
  • If files contain overlapping data, consider how duplicate records are treated during import. For supported standard formats—such as data feeds, Argos DIAG files, e-obs logger.bin files—records that are a complete duplicate of an existing event in your study will typically be ignored during import by default. File Statistics shown on the Studies Page will represent only new events added when the file was imported, and therefore might show that fewer records were imported than are contained in the original file.

Is it possible to import data without going through the import interface?

Movebank's import interface is designed to support flexible and accurate import of data from a wide variety of source formats. Where available, live data feeds and some supported formats allow users to import data without needing to define mappings, formats, or timestamp and unit conversions because we have already specified these with the data providers. For these known formats, it is also possible for advanced users to import data to your study using curl scripts. Contact support@movebank.org for details.

How do I split or merge studies?

There is no automated way to combine or split Movebank studies. Studies in Movebank are treated independently by the database, meaning that data owners are free to split or merge studies as needed for their research, but that fully automated procedures would have a high risk of creating unintended results. Follow these general steps to combine or separate studies:

  1. Download backups of what is currently in each relevant study just in case you need to refer back to it later.

  2. Prepare deployment information: download the reference data from all relevant studies and create a new reference data file for each new study, containing only the deployments that should be part of each new study. See our minimum suggested set of reference data attributes.

  3. Get the correct event data into each study.
    Adding data: The easiest way to add data will depend on how your original study was created:

    • If you have non-Argos data feeds, set up a feed to the tags in the new study or studies, and all existing data for those tags will import automatically.
    • If data came from a small number of original files or an Argos feed, you can download these original data and reimport them to the new study or studies.
    • If data came from a large number of files, combine data from multiple files, formats, and feeds, or have been manually filtered to flag outliers, it might be easiest to download the data in Movebank format for each study, remove data for unwanted tags and animals, and import the file as custom tabular data.

    Deleting data: To remove data from an existing study, or unwanted data added to a new study using original data files, you'll want to delete all unwanted tags and animals. This is most easily done using the batch edit option in the Deployment Manager. Deleting tags will also remove associated event data.

  4. Import the reference data from step 2 to the new study or studies.

  5. Verify that all the data are imported and organized as expected in the new study or studies. You can compare the original and split/merged studies by viewing tracks on the Tracking Data Map and comparing statistics for the study or specific entities in the Studies page. Also see our tips for quality control.

  6. Once you have confirmed that the new study or studies are complete and organized, you can delete any studies that are no longer needed.

I have data from different sources that include duplicate fixes, but there might be slight differences between files. How can I organize and filter this?

The duplicate detection in Movebank depends on the tag ID and timestamp being exactly the same. So, for example, if seconds have been removed from some timestamps, or multiple tag IDs have been used, this will need to be addressed before importing your files.

Assuming tag IDs and timestamps are consistent, there are two main ways to address duplicates:

  • Before finishing a file import, as shown here, you can have Movebank (1) ignore records if the study already has a record with an exactly matching tag ID, timestamp, and sensor type; (2) ignore records if ALL mapped data attributes exactly match an existing record; or (3) import all records regardless of duplication. In this case, duplicate records are never imported to the database.

  • After importing files, you can run the duplicate filter to flag imported records as outliers. Here you can flag duplicate records based on exactly matching any set of attributes in the dataset, at minimum including a matching tag ID, timestamp, and sensor type. In doing this, you can simply retain the first matching record, or you can prioritize which record to retain based on conditions in other attributes; for example, you could keep the record with the lowest value in "gps hdop".
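The second approach can be sketched as follows; the field names are illustrative, not Movebank's exact attribute names.

```python
def flag_duplicates(events, key=("tag_id", "timestamp", "sensor_type"),
                    prefer="gps_hdop"):
    """Group events on matching key attributes, keep only the record
    with the lowest value of `prefer`, and flag the rest as outliers.
    Field names are illustrative, not Movebank's exact attributes."""
    best = {}
    for e in events:
        k = tuple(e[a] for a in key)
        if k not in best or e[prefer] < best[k][prefer]:
            best[k] = e
    for e in events:
        e["outlier"] = e is not best[tuple(e[a] for a in key)]
    return events
```

When two candidates tie on the preferred attribute, this sketch keeps the first one encountered, matching the "retain the first matching record" behavior described above.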