Diagnostics API

Use the Diagnostics API to run system and performance tests and get the status and results of test runs.

See Instabase API authorization and response conventions for details on authorization, success responses, and error responses.

For the Diagnostics API, api_root defines where to route API requests for your Instabase instance:

import json, requests

api_root = "https://instabase.com/api/v1/diagnostics"

Run system test

Use this API to begin running system tests.

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
args = {
  'api_key': 'ABCDEFGHIJKLMOPQRS',
  'url_base': 'https://instabase.com',
  'root_test_folder_path': 'admin/my-repo/fs/Instabase Drive/test_root'
}
data = json.dumps(args)
resp = requests.post(api_root + '/system_test', headers=headers, data=data).json()

The body of the request must be a JSON object with the following fields:

  • api_key: the OAuth token to use for API calls for system tests.

  • url_base: the URL base to use for API calls during testing.

  • root_test_folder_path: (Optional, but required if test_repo_name and test_drive_name aren’t provided) Specify the file path of a folder where a temporary sub-folder can be generated for testing purposes. All files used for testing are housed in the sub-folder, which you can delete when testing is complete. If this field is omitted, you must include the test_repo_name and test_drive_name parameters instead, and a folder path is generated from those parameters.

  • test_repo_name: (Optional, but must be passed along with test_drive_name) Specify the name of the repo where a temporary sub-folder can be generated for testing purposes. All files used for testing are housed in the sub-folder, which you can delete when testing is complete. If this field is omitted, you must include the root_test_folder_path parameter instead.

  • test_drive_name: (Optional, but must be passed along with test_repo_name) Specify the name of the drive where a temporary sub-folder can be generated for testing purposes. All files used for testing are housed in the sub-folder, which you can delete when testing is complete. If this field is omitted, you must include the root_test_folder_path parameter instead. See the example request after this list.
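
If you use the repo/drive form instead of an explicit path, the request looks like the sketch below. The repo and drive names here are hypothetical; a test folder path is generated from them.

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
args = {
  'api_key': 'ABCDEFGHIJKLMOPQRS',
  'url_base': 'https://instabase.com',
  # Hypothetical repo and drive names; the test folder path is generated from them.
  'test_repo_name': 'my-repo',
  'test_drive_name': 'Instabase Drive'
}
data = json.dumps(args)
resp = requests.post(api_root + '/system_test', headers=headers, data=data).json()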

The body of the request can also include the following optional parameters (see the example request after this list):

  • tags: A list of tags (passed as strings) that specifies which tests the API call triggers. By default, a test is included if it has any of the listed tags; to require that a test have all the listed tags to be included, set the require_all_tags parameter to true. If tags isn’t provided, it defaults to the on-prem tag.

  • exclude_tags: A list of tags (passed as strings) that excludes specific tests from being triggered by the API call. If a test has any of the listed tags, it won’t be included. If exclude_tags isn’t provided, it defaults to the admin tag.

  • require_all_tags: A boolean flag that affects how the tags parameter is interpreted. When set to true, a test must have all tags listed in the tags parameter to be included. The default value is false, in which case tests with any of the tags listed in the tags parameter are included.
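
For example, a sketch of a request that combines these parameters; the drive tag is hypothetical and used only for illustration:

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
args = {
  'api_key': 'ABCDEFGHIJKLMOPQRS',
  'url_base': 'https://instabase.com',
  'root_test_folder_path': 'admin/my-repo/fs/Instabase Drive/test_root',
  # Illustrative: run only tests tagged with both 'on-prem' and 'drive' ...
  'tags': ['on-prem', 'drive'],
  'require_all_tags': True,
  # ... and exclude any test tagged 'admin'.
  'exclude_tags': ['admin']
}
data = json.dumps(args)
resp = requests.post(api_root + '/system_test', headers=headers, data=data).json()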

Response

If testing began successfully:

HTTP STATUS CODE 200

{
  "status": "OK",
  "test_id": "system-test-run-identifier"
}

Get system test status

Use this API to retrieve the status of a system test run, and the results of that run if the tests have completed.

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/system_test/result/<test_id>', headers=headers).json()

Response

If successful:

{
  "state": "DONE",
  "status": "OK",
  "test_result":{
      "status": "OK",
      "msg": null,
      "was_successful": false,
      "errors": {
          "test_that_encountered_error": "traceback of error"
      },
      "failures": {
          "test_that_failed": "cause of failure"
      },
      "successes": ["test_that_succeeded"]
  },
  "total": 3,
  "start_time": 152392312.0923,
  "end_time": 163392532.18543,
  "username": "instabase_user"
}

The body of the response is a JSON dictionary with the following fields:

  • state: "PENDING" | "DONE" | "ABORTED": Indicates if the test run has finished.

  • status: "OK" | "ERROR": Indicates if the test status API call encountered an error.

  • msg: If status is "ERROR", contains information for the error.

  • test_result: A dictionary object that contains detailed information about the test run after state is DONE.

    • status: "OK" | "ERROR". Indicates whether the test run was able to be completed.

    • msg: If test_result.status is "ERROR", contains information on the error that occurred.

    • was_successful: Indicates whether all tests succeeded.

    • errors: A dictionary that maps each test that failed due to an uncaught exception to a traceback of the exception.

    • failures: A dictionary that maps each test that failed gracefully to a description of the failure.

    • successes: A list that includes the names of the tests that passed.

  • total: The number of tests that were run.

  • start_time: The time when the tests started running.

  • end_time: The time when the tests finished running.

  • username: The username of the user who ran the system tests.

The difference between a test error and a test failure is that the former fails due to an uncaught exception, while the latter fails because the results didn’t match what the test expected. In either case, was_successful is false, but subsequent tests are still run.
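
Putting the two calls together, a minimal polling sketch that starts a run, waits for it to finish, and reports errors and failures separately. The polling interval is an assumption.

import json, time, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}

# Start the run; args is the request body from the example above.
resp = requests.post(api_root + '/system_test', headers=headers, data=json.dumps(args)).json()
test_id = resp['test_id']

# Poll until the run leaves the PENDING state.
while True:
    status = requests.get(api_root + '/system_test/result/' + test_id, headers=headers).json()
    if status['state'] != 'PENDING':
        break
    time.sleep(10)  # assumed polling interval

result = status.get('test_result', {})
if not result.get('was_successful'):
    for test, traceback in result.get('errors', {}).items():
        print('ERROR in {0}: {1}'.format(test, traceback))
    for test, cause in result.get('failures', {}).items():
        print('FAILURE in {0}: {1}'.format(test, cause))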

Supported tests

Supported tests are tagged with on-prem. Currently, system tests cover the following scenarios:

  • Drive tests:

    • test_mkdir_and_create_file: tests creating directories and files.

    • test_copy_and_move_file: tests copying and moving files.

    • test_copy_and_move_folder: tests copying and moving folders.

    • test_list_dir: tests listing directories’ contents.

    • test_write_and_read_large_files: tests reading and writing files up to 100MB in size.

    • test_consecutive_append: tests consecutively appending to the ends of files.

    • test_unzip_and_extract: tests unzipping and extracting operations.

Run filesystem performance tests

Use this API to discover available filesystem performance tests and their descriptions, and to begin running filesystem performance tests.

Note

You must enable the Marketplace before using this API.

Discover available filesystem performance tests

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/filesystem', headers=headers).json()

Response

If successful:

HTTP STATUS CODE 200

{
  "status": "OK",
  "tests": [
    {
      "test_name": "perf_test_read_file_api",
      "test_description": "Invokes 'Read-File HTTP API' calls and meters performance"
    },
    ...
  ]
}
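
For example, to print the name and description of each discovered test from this response:

for test in resp['tests']:
    print('{0}: {1}'.format(test['test_name'], test['test_description']))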

Start test

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
args = {
  'api_key': 'ABCDEFGHIJKLMOPQRS',
  'root_test_folder_path': 'admin/my-repo/fs/Instabase Drive/test_root', 
  'duration_seconds': 1, 
  'num_threads': [1], 
  'file_sizes_kb': [1],
  'tests': ['perf_test_read_file_api']
}
data = json.dumps(args)
resp = requests.post(api_root + '/performance_test/filesystem', headers=headers, data=data).json()

The body of the request must be a JSON object with the following fields:

  • api_key [str]: OAuth token used to make API calls during the tests.

  • root_test_folder_path [str]: Path of the folder where a temporary sub-folder is generated for the test. All files used in the test are housed in the sub-folder, which can be deleted after test completion. Check the teardown_test_status field in the test results to see whether the temporary sub-folder was successfully deleted.

  • duration_seconds [int] (optional): Length of time each test runs for, in seconds. The default is 10 (capped at 1800).

  • num_threads [list of ints] (optional): Concurrency levels at which the tests run file operations (values capped at 100).

  • file_sizes_kb [list of ints] (optional): File sizes, in KB, used for running each test (values capped at 200 MB / max(num_threads)).

  • tests [list of str] (optional): Specifies the test cases to run. Valid test names are: perf_test_read_file_rpc, perf_test_read_file_api, perf_test_write_file_api, perf_test_write_file_rpc, perf_test_write_file_multipart_rpc.

  • verify_ssl_certs [bool] (optional): Enables/disables SSL certificate verification.

Note

These tests produce a substantial number of temporary files. The test runner attempts to clean up generated files at the end of each test, but with more storage-heavy configurations, such as larger file sizes in file_sizes_kb, you can expect to see temporary storage bloat. If your storage bucket is versioned, this storage bloat might persist after the test has finished.

Info

Certain parameters have their maximum values capped to reduce the risk of creating configurations that cause your test to crash. Affected parameters have the value limit noted in their descriptions.
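
As a defensive measure, you can clamp your parameters client-side before submitting the request; a minimal sketch, with the cap constants restating the limits documented above:

# Caps restated from the parameter descriptions above.
MAX_DURATION_SECONDS = 1800
MAX_THREADS = 100
MAX_TOTAL_KB = 200 * 1024  # 200 MB expressed in KB

def clamp_args(args):
    args['duration_seconds'] = min(args.get('duration_seconds', 10), MAX_DURATION_SECONDS)
    args['num_threads'] = [min(n, MAX_THREADS) for n in args.get('num_threads', [1])]
    # Each file size is capped by 200 MB / max(num_threads).
    per_file_cap = MAX_TOTAL_KB // max(args['num_threads'])
    args['file_sizes_kb'] = [min(s, per_file_cap) for s in args.get('file_sizes_kb', [1])]
    return args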

Response

If testing began successfully:

HTTP STATUS CODE 200

{
  "status": "OK",
  "test_id": "<unique-test-ID>"
}

Get filesystem performance test status

Use this API to retrieve the status of a performance test run, and the results of that run if the tests have completed (status is no longer “PENDING”). If the test crashes, it might be left permanently in the “PENDING” state; in this case, run the test again.

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/result/<test_id>', headers=headers).json()
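
Because a crashed test can be left in the “PENDING” state indefinitely, it is sensible to bound the wait; a minimal polling sketch, in which the timeout and interval values are assumptions:

import time, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
deadline = time.time() + 3600  # assumed one-hour timeout

while time.time() < deadline:
    resp = requests.get(api_root + '/performance_test/result/' + test_id, headers=headers).json()
    if resp['status'] != 'PENDING':
        break
    time.sleep(30)  # assumed polling interval
else:
    # Still PENDING at the deadline; the run may have crashed, so re-run it.
    raise TimeoutError('Test {0} is still PENDING'.format(test_id))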

Response

Successful response:

{
  "status": "OK",
  "test_status": "DONE",
  "test_result":{
    "suite_name": "filesystem_test_suite", 
    "test_params": {
      "num_threads": [1], 
      "duration_seconds": 1, 
      "tests": ["_test_read_file_api"], 
      "test_file_sizes_kb": [1],
      "test_root_path": "tester/tests/fs/Instabase Drive/performance_tests"
    }, 
    "version": 1,
    "suite_status": {
      "status_code": "OK",
      "msg": ""
    },
    "teardown_status": {
      "status_code": "OK",
      "msg": ""
    },
    "start_time": "2021-12-07T20:51:25.792180",
    "end_time": "2021-12-07T20:52:58.948332",
    "results": [
    {
      "test_name": "Read-File HTTP API 1kb",
      "test_description": "Invokes 'Read-File HTTP API' calls and meters performance",
      "request_type": "HTTP",
      "file_info": {
        "file_size": 1,
        "file_paths": [
          "<paths where file(s) existed during this test>"
        ],
        "file_type": "<the file type used for the test>"
      },
      "start_time": "<Start time of this test as a datetime string>",
      "thread_count": 1,
      "errors": ["<list of observed errors during this test>"],
      "statistics": {
        "num_successes": "<number of successful executions of this operation>",
        "num_errors": "<number of errors during executions of this operation>",
        "num_total": "<total recorded executions of this operations>",
        "requests_per_second": "<operations succesfully executed per second>",
        "latencies_stats_seconds": {
          "mean": "<mean latency of successfully executed operations>",
          "median": "<median latency of successfully executed operations>",
          "90th_percentile": "<90th percentile latency of successfully executed operations>",
          "99th_percentile": "<99th percentile latency of successfully executed operations>",
          "75th_percentile": "<75th percentile latency of successfully executed operations>",
          "max": "<max latency of successfully executed operations>",
          "min": "<min latency of successfully executed operation>"
        }
      },
      "test_status": {
        "status_code": "OK",
        "msg": ""
      },
      "end_time": "<End time of this test as a datetime string>",
      "teardown_test_status": {
        "status_code": "OK",
        "msg": ""
      }
    }]
  }
}

The body of the response is a JSON object with the following fields:

  • status: "PENDING" | "DONE" | "ERROR": Indicates the status of the performance test.

  • msg: If status is "ERROR", contains information for the error.

  • test_result: A dictionary object that contains detailed information about the test run after status is DONE.

The test_result object contains the following fields:

  • suite_name: The name of the performance test that was run, such as filesystem_test_suite.

  • test_params: Object containing the parameters the filesystem performance tests ran over.

  • version: Version of the result schema for the performance test; expected value is 1.

  • suite_status: Status of the suite of tests. If suite_status.status_code is "ERROR", then suite_status.msg should contain information about the error.

  • teardown_status: Status of cleanup after tests. If teardown_status.status_code is "ERROR", there might be artifacts from the performance test. To clean these artifacts up, navigate to the test_root_path specified in the test_params field and delete the folder path /performance_tests/<test_id>.

  • start_time: Starting time of the performance test, represented as a timestamp string.

  • end_time: Ending time of the performance test, represented as a timestamp string.

  • results: List containing results for each test run.
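
For example, once the test run is DONE, a short sketch that summarizes the throughput and median latency of each test in the results list:

for r in resp['test_result']['results']:
    stats = r['statistics']
    print('{0}: {1}/{2} ok, {3} req/s, median latency {4}s'.format(
        r['test_name'],
        stats['num_successes'],
        stats['num_total'],
        stats['requests_per_second'],
        stats['latencies_stats_seconds']['median']))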

Run database performance tests

Use this API to discover available database performance tests and their descriptions and to trigger database performance tests.

Discover available database performance tests

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/database', headers=headers).json()

Response

If successful:

HTTP STATUS CODE 200

{
  "status": "OK",
  "tests": [
    {
      "test_name": "perf_test_delete_database_rpc",
      "test_description": "Invokes a query to delete one row from testing table and meters performance. The query will be executed 'iterations' times"
    },

    ...,

    {
      "test_name": "perf_test_update_database_rpc",
      "test_description": "Invokes a query to update one row into testing table and meters performance. The query will be executed 'iterations' times"
    }
  ]
}

Start test

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}

args = {
  'api_key': 'ABCDEFGHIJKLMOPQRS',
  'iterations': 1,
  'scan_test_row_count': 100,
  'row_count': 10,
  'tests': ['perf_test_ping_database_rpc'],
}

data = json.dumps(args)
resp = requests.post(api_root + '/performance_test/database', headers=headers, data=data).json()

The body of the request is a JSON object with the following fields:

  • api_key [str]: OAuth token used to make API calls during the tests.

  • iterations [int]: Number of repetitions for each test case (max: 1,000,000).

  • scan_test_row_count [int] (optional): Number of rows inserted during the setup process for SCAN related tests (default: 1,000; max: 10,000).

  • row_count [int] (optional): Number of rows inserted during the setup process for READ/UPDATE related tests (default: 100; max: 1,000).

  • tests [list of str] (optional): Specifies the test cases to run. Valid test names are: perf_test_delete_database_rpc, perf_test_insert_database_rpc, perf_test_insert_large_text_database_rpc, perf_test_join_database_rpc, perf_test_ping_database_rpc, perf_test_read_database_rpc, perf_test_read_large_text_database_rpc, perf_test_scan_database_rpc, perf_test_scan_index_database_rpc, perf_test_scan_sorted_database_rpc, perf_test_update_database_rpc.

  • verify_ssl_certs [bool] (optional): Enables/disables SSL certificate verification.

Note

It’s possible to increase the configuration variables to a level that causes your test to fail. The upper limits noted in the field descriptions might not apply to all configurations and environments; for example, the speed of database operations can vary across database dialects. A smaller number of iterations (<= 1000) is recommended for time-consuming tests such as SCAN or JOIN; a larger number of iterations can be used for lightweight tests such as PING or READ.
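
Following that guidance, a sketch that picks iteration counts by test weight before starting runs; the specific counts are illustrative:

import json, requests

# Illustrative split: keep heavy SCAN/JOIN tests at or below 1,000 iterations,
# while lightweight PING/READ tests can run many more.
test_groups = [
    (['perf_test_scan_database_rpc', 'perf_test_join_database_rpc'], 1000),
    (['perf_test_ping_database_rpc', 'perf_test_read_database_rpc'], 100000),
]

headers = {'Authorization': 'Bearer {0}'.format(token)}
for tests, iterations in test_groups:
    args = {'api_key': 'ABCDEFGHIJKLMOPQRS', 'iterations': iterations, 'tests': tests}
    resp = requests.post(api_root + '/performance_test/database',
                         headers=headers, data=json.dumps(args)).json()
    print(resp['test_id'])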

Response

If testing began successfully:

HTTP STATUS CODE 200

{
  "status": "OK",
  "test_id": "perf-test-db-<unique-test-ID>"
}

Get database performance test status

The API used for retrieving the status of a database performance test is the same as the one used for filesystem performance tests.

Request

import json, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/result/<test_id>', headers=headers).json()

Response

If successful, the result is similar to the sample below:

{
  "status": "OK", 
  "test_status": "DONE", 
  "test_result": {
    "suite_name": "database_test_suite", 
    "test_params": {
      "tests": ["perf_test_ping_database_rpc"], 
      "username": "<user_name>", 
      "iterations": 1, 
      "scan_test_row_count": 1000, 
      "row_count": 100
    }, 
    "version": 1, 
    "suite_status": {
      "status_code": "OK", 
      "msg": ""
    }, 
    "teardown_status": {
      "status_code": "OK", 
      "msg": ""
    }, 
    "start_time": 1667954291.771355, 
    "end_time": 1667954294.275358, 
    "results": [
      {
        "test_name": "PING RPC", 
        "test_description": "Invokes a simple database query and meters performance. The query will be executed 'iterations' times", 
        "request_type": "RPC", 
        "start_time": "<Start time of this test as a string with ISO 8601 format (UTC timezone)>", 
        "test_status": {
          "status_code": "OK", 
          "msg": ""
        }, 
        "teardown_test_status": {
          "status_code": "OK", 
          "msg": ""
        }, 
        "errors": ["<list of observed errors during this test>"], 
        "statistics": {
          "num_successes": "<number of successful executions of this operation>", 
          "num_errors": "<number of errors during executions of this operation>", 
          "num_total": "<total recorded executions of this operations>", 
          "requests_per_second": "<operations succesfully executed per second>", 
          "latencies_stats_seconds": {
            "mean": "<mean latency of successfully executed operations>", 
            "median": "<median latency of successfully executed operations>", 
            "90th_percentile": "<90th percentile latency of successfully executed operations>", 
            "99th_percentile": "<99th percentile latency of successfully executed operations>", 
            "75th_percentile": "<75th percentile latency of successfully executed operations>", 
            "max": "<max latency of successfully executed operations>", 
            "min": "<min latency of successfully executed operations>"
          }
        }, 
        "end_time": "<End time of this test as a string with ISO 8601 format (UTC timezone)>"
      }
    ], 
    "successes": [
      {
        "test": "perf_test_ping_database_rpc"
      }
    ], 
    "total": "<total number of tests>", 
    "username": "<user_name>"
  }
}

The body of the response is a JSON object with the following fields:

  • status: "PENDING" | "DONE" | "ERROR": Indicates the status of the performance test.

  • msg: If status is "ERROR", contains information for the error.

  • test_result: A dictionary object that contains detailed information about the test run when status is DONE.

The test_result object contains the following fields:

  • suite_name: The name of the performance test that was run, such as database_test_suite.

  • test_params: Object containing the parameters of the database performance tests, including tests, iterations, scan_test_row_count and row_count.

  • version: Version of the result schema for the performance test; expected value is "1".

  • suite_status: Status of the suite of tests. If suite_status.status_code is "ERROR", then suite_status.msg contains information about the error.

  • teardown_status: Status of cleanup after tests. In database performance tests, the teardown_status.status_code will always be "OK".

  • start_time: Starting time of the performance test, represented as a Unix timestamp.

  • end_time: Ending time of the performance test, represented as a Unix timestamp.

  • results: List containing results for each test run.
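
For example, once the run is DONE, a short sketch that reports how many of the database tests passed:

result = resp['test_result']
passed = [entry['test'] for entry in result.get('successes', [])]
print('{0} of {1} tests passed: {2}'.format(len(passed), result['total'], ', '.join(passed)))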