Welcome to the guide on backing up and restoring graph database repositories with the enapso-graphdb-cli tool. This document provides step-by-step instructions for installing the tool and running backup and restore operations against popular triplestores such as GraphDB, Fuseki, and Stardog. Implementing these procedures is crucial for protecting against data loss and ensuring that your graph database repositories can recover quickly from disruptions.

For detailed information, to share feedback, or to contribute, please visit our GitHub repository and check out our package on npm. Sharing and reusing this technical documentation can simplify these processes and help raise awareness about our tools.

Prerequisites

Ensure Node.js is installed on your machine. If not, install it from the Node.js official website. This installation includes npm (Node Package Manager), which manages Node packages.

After installation, verify that Node.js and npm are successfully installed by doing the following:

  • Open a command prompt or terminal.

  • Run the command node -v and press Enter. This will display the version of Node.js if it is installed.

    [Screenshot: node version.png — output of node -v]
  • Run the command npm -v and press Enter. This will display the version of npm if it is installed.

    [Screenshot: npm version.png — output of npm -v]
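The two checks above can also be combined into a single guard. This is only a convenience sketch: it reports which tool is missing rather than assuming both are present.

```shell
# Check that Node.js and npm are on the PATH and report anything missing.
missing=""
command -v node >/dev/null 2>&1 || missing="node"
command -v npm  >/dev/null 2>&1 || missing="$missing npm"

if [ -z "$missing" ]; then
    echo "Node.js $(node -v) and npm $(npm -v) are installed."
else
    echo "Missing:$missing - install Node.js from https://nodejs.org first." >&2
fi
```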

Installation

Install the enapso-graphdb-cli tool globally using npm:

Code Block
npm install -g @innotrade/enapso-graphdb-cli
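Once the install completes, you can confirm that the CLI binary (named enapsogdb, as used in the scripts below) ended up on your PATH. This is an informal sanity check, not an official verification step:

```shell
# Verify the global install put the enapsogdb binary on the PATH.
if command -v enapsogdb >/dev/null 2>&1; then
    echo "enapso-graphdb-cli is installed and ready."
else
    echo "enapsogdb not found - check your global npm prefix (npm prefix -g)." >&2
fi
```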

Supported Triplestores

The tool supports the following triplestores:

  • GraphDB

  • Fuseki

  • Stardog

Backup Process

Use the following script to create a backup of the graph database repository. This example is configured for a Fuseki repository but can be adjusted for the other supported triplestores by changing the appropriate variables. Here's the complete process and an explanation of the variables used in the script:

Create a New Script File

  1. Open a text editor such as Notepad++ or Visual Studio Code.

  2. Copy and paste the script content below.

Script Content

Code Block
languagebash
#!/bin/bash
echo "Backup Script for Exporting Ontology from Graph Database Using enapso-graphdb-cli"

# Set Variables
DB_URL="http://localhost/fuseki"
REPOSITORY_NAME="Test"
FORMAT="application/x-trig"
EXPORT_FILE="export.trig"
REPORT_FILE="report.txt"
TRIPLESTORE="fuseki"

# Remove Previous Report File (-f avoids an error if it does not exist yet)
echo "Removing Previous Report File..."
rm -f "$REPORT_FILE"

# Export ontology
enapsogdb export --dburl "$DB_URL" --repository "$REPOSITORY_NAME" --targetfile "$EXPORT_FILE" --triplestore "$TRIPLESTORE" >> "$REPORT_FILE" 2>&1

if [ $? -eq 0 ]; then
    echo "Backup made successfully"
else
    echo "Backup failed - see $REPORT_FILE for details" >&2
    exit 1
fi

Variables Explanation

  1. DB_URL: URL where the triplestore is running.

  2. REPOSITORY_NAME: Name of the repository.

  3. FORMAT: Data format of the export. Fuseki returns data only in the application/x-trig format, so specifying it is not necessary there. For other triplestores such as GraphDB and Stardog, application/x-trig is the recommended format because it supports named graphs.

  4. EXPORT_FILE: Path and filename for the backup file.

  5. REPORT_FILE: Path and filename for the report file, which contains the response from the script's execution.

  6. TRIPLESTORE: Type of the triplestore (fuseki, graphdb, stardog).
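To target GraphDB or Stardog instead of Fuseki, only the variables need to change. The hostnames and ports below are illustrative defaults (GraphDB typically listens on 7200, Stardog on 5820), so adjust them to your deployment; the second set simply overwrites the first if you paste both:

```shell
# Example variable sets for the other supported triplestores.
# Hostnames and ports are illustrative defaults - adjust to your setup.

# GraphDB (default HTTP port 7200)
DB_URL="http://localhost:7200"
TRIPLESTORE="graphdb"
FORMAT="application/x-trig"   # recommended: preserves named graphs

# Stardog (default HTTP port 5820)
DB_URL="http://localhost:5820"
TRIPLESTORE="stardog"
FORMAT="application/x-trig"
```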

Save the Script

  1. Save the file with a .sh extension, such as backup_script.sh.

Run the Script

  1. For Linux/macOS: Make the script executable with chmod +x backup_script.sh and run it by navigating to the directory and typing ./backup_script.sh.

  2. For Windows: Ensure you have a tool like Git Bash, Cygwin, or WSL installed that can run Bash scripts. Navigate to the directory where the script is saved and execute it by typing ./backup_script.sh.

This setup allows you to create consistent and reliable backups of your graph database repositories across different operating systems and triplestore configurations.
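After the script runs, it is worth confirming that the export actually contains data before trusting it as a backup. A minimal check, assuming the EXPORT_FILE and REPORT_FILE names from the script above:

```shell
EXPORT_FILE="export.trig"
REPORT_FILE="report.txt"

# An empty export file usually means the request failed silently;
# the report file captures the CLI's output for diagnosis.
if [ -s "$EXPORT_FILE" ]; then
    echo "Backup OK: $(wc -c < "$EXPORT_FILE") bytes in $EXPORT_FILE"
else
    echo "Backup looks empty - inspect $REPORT_FILE for errors." >&2
fi
```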

Restore Process

The following script restores a graph database repository from a backup file. It includes an optional step to rebuild the cache on the ENAPSO platform, which is useful for users of the ENAPSO together or ENAPSO together Free services; these use a cache mechanism to improve data management efficiency and speed. Here's the complete process and an explanation of the variables used in the script:

Create a New Script File

  1. Open a text editor such as Notepad++ or Visual Studio Code.

  2. Copy and paste the script content below.

Script Content

Code Block
languagebash
#!/bin/bash
echo "Running Script for Restoring Graph Database Repository Using enapso-graphdb-cli Tool"

# Set Variables
DB_URL="http://localhost/fuseki"
REPOSITORY_NAME="Test"
FORMAT="application/x-trig"
SOURCE_FILE="export.trig"
REPORT_FILE="report.txt"
TRIPLESTORE="fuseki"

# Remove Previous Report File (-f avoids an error if it does not exist yet)
echo "Removing Previous Report File..."
rm -f "$REPORT_FILE"

# Import ontology
enapsogdb import --dburl "$DB_URL" --repository "$REPOSITORY_NAME" --sourcefile "$SOURCE_FILE" --format "$FORMAT" --triplestore "$TRIPLESTORE" >> "$REPORT_FILE" 2>&1

if [ $? -ne 0 ]; then
    echo "Restore failed - see $REPORT_FILE for details" >&2
    exit 1
fi

# Rebuild cache (optional - only for repositories hosted on the ENAPSO platform)
curl -X POST http://localhost/enapso-dev/graphdb-management/v1/build-cache >> "$REPORT_FILE" 2>&1

echo "Graph Database Repository Successfully Restored"

Variables Explanation

  • DB_URL: URL where the triplestore is running.

  • REPOSITORY_NAME: Name of the repository.

  • FORMAT: Data format of the backup file, which is application/x-trig.

  • SOURCE_FILE: Path and filename for the backup file.

  • REPORT_FILE: Path and filename for the report file, which contains the response from the script's execution.

  • TRIPLESTORE: Type of the triplestore (fuseki, graphdb, stardog).

Save the Script

  1. Save the file with a .sh extension, such as restore_script.sh.

Run the Script

  1. For Linux/macOS: Make the script executable with chmod +x restore_script.sh and run it by navigating to the directory and typing ./restore_script.sh.

  2. For Windows: Ensure you have a tool like Git Bash, Cygwin, or WSL installed that can run scripts. Navigate to the directory where the script is saved and execute it by typing ./restore_script.sh.

This setup allows you to restore your graph database repositories across different operating systems and triplestore configurations.

Additional Step Explanation

The curl request to rebuild the cache is an optional step, relevant for users whose repositories are hosted on the ENAPSO platform:

...

This command triggers the ENAPSO service to rebuild its cache using the latest uploaded data. This step ensures that any changes from the restoration process are promptly reflected, enhancing the performance and efficiency of queries against the updated repository.

Include this step if you are using the ENAPSO platform, so its cache mechanism can deliver faster data retrieval times after restoration.

With these scripts in place, you can use the enapso-graphdb-cli tool effectively and manage your graph databases with confidence.

Conclusion

Regular backups and effective restoration capabilities are essential for managing graph database repositories securely. Using the enapso-graphdb-cli tool, users can easily safeguard their data and restore it quickly if necessary. It's important to maintain a consistent backup routine and periodically test your restoration process to ensure data integrity and minimize downtime.

For production environments, it is recommended to automate this process using cron jobs or scheduled tasks to ensure backups are performed regularly without manual intervention. Automating the backup process enhances consistency by maintaining a regular backup schedule, improves reliability by reducing the likelihood of human error, and increases efficiency by allowing the backup process to run in the background.
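A nightly cron job is enough for most setups. As a sketch, the crontab entry below runs the backup script at 02:00 every day; the paths are placeholders for wherever you saved backup_script.sh and want the cron log written:

```shell
# Open the crontab editor with `crontab -e`, then add a line like:
# minute hour day-of-month month day-of-week  command
0 2 * * * /path/to/backup_script.sh >> /path/to/backup_cron.log 2>&1
```

On Windows, the equivalent is a scheduled task (Task Scheduler) that invokes the script through Git Bash or WSL.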

For additional support or details, refer to the enapso-graphdb-cli documentation on npm. This will ensure that your data management processes remain robust and reliable.

...