
This is a legally copyrighted work; none of it may, legally or honorably, be copied, used, or claimed by anyone other than the owner.

__________________________________________________
Document Stats
Most recent update 20230707
__________________________________________________
Important

All tests were run on mechanical disk drives.
To get a rough feel for the impact of solid
state drives, move decimal points two places
to the left in the published test durations.

This address is: jragan.com/axletests.htm#10.85.00

__________________________________________________



AxleBase Test Results


__________________________________________________
__________________________________________________

Contents Of This Document

Disk Drive Speed
System Snapshot
Test:     Very Large Table Build
Test:     Very Large Table Query
Test:     Small Table Query
Test:     Tiny Table Query
Test:     SQL Very Large Joins
Test:     SQL Large Updates
Test:     Virtualized Tables
Test:     Mixed Storage Media
Test:     Concurrency
Test:     Index Build Speed
Test:     SQL Data Inserts
Test:     SQL Large Row Stress
Test:     File Manager Stress
Test:     Super-System
Test:     Scaling The Super-System
Test:     BLOB Management
Test:     Distribution Performance
 
Appendices
      Empirical Verification
      Performance Comparisons: Dremel
      Scaling And Model Hierarchy Validation
      Methods & Objectives
      Data Table Descriptions
      Datatype Descriptions
      Shannon Data
      Test Computers
      Misc. Equipment
      Raw Test Values, standard configuration
      Raw Test Values, supersystem configuration
      Project Status


AxleBase Technology
Copyright 2003 - 2023 John E. Ragan.







__________________________________________________
__________________________________________________
AxleBase Tests
Section
System Snapshot



This address is: jragan.com/axletests.htm#01.01.00


system name AxleBase
software family database manager
family subtype relational database manager
capacity 20 exabytes per table
scalability hundreds of computers
______________ _______________________



End of System Snapshot.

Click to return to table of contents.
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Very Large Table (VLT) Build



This address:   jragan.com/axletests.htm#10.85.05

Objective :
      Test ability to build and control a very large table.
      Ensure that operation and size are transparent to the user.

Notes follow the Test Tabulation.



Test Tabulation
VLT Build
build date July 2015   (*see note)
test results  
errors none
problems none
performance to specification
data  
row count 100 Billion
( 100,273,828,500 )
indexed yes   3 indices
byte count in data 8 terabytes
( 8,021,779,680,000 )
byte count with indices 25 terabytes
( 25,315,676,628,858 )
file count in table 38 thousand
( 38,005 )
mechanics of interest  
file server count 11
file server computers 2,3,5,6,11,12,13,14,15,16,17
see computer appendix
file server disk drives see misc. equipment appendix.
host app. AxHandle
network see misc. equipment appendix.
data table t_citizen_status
see data table appendix
miscellaneous yield processor was on
______________________________________________

Summation :
      The ShowHealth command reports all components on-line
          and operational.
      The ShowTopology command returns the expected
          physical topology.
      Queries and commands perform as expected.

* Build Date :
      Shows completion of the latest addition.
      The intermittent build has actually spanned a decade
          as equipment was added to the project.
      A contiguous build would run for years on this equipment.

Description Sources :
      AxleBase management reports.

SQL :
      Built entirely with standard ANSI-92 SQL.
      AxHandle generated each row and handed it to AxleBase
          in a SQL insert command.
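
The pattern is simple in concept: generate a row, wrap it in a standard insert statement, and hand it over. Below is a minimal Python sketch of such a build loop; the column names, the literal-value sets, and the execute() hand-off are illustrative assumptions, not the actual AxHandle or AxleBase interfaces.

      # Minimal sketch of a build loop of this kind (assumed names; not the real AxHandle code).
      # Each generated row is handed over as a standard ANSI-92 insert statement.
      NAMES = ["nadine", "laura", "monica", "lauren", "anna"]     # recurrent literal set (illustrative)
      LOCATIONS = ["s877874857901", "s56565656565656"]            # recurrent literal set (illustrative)

      def build_insert(citizen_id, name, location):
          # Build one insert statement for assumed t_citizen_status columns.
          return ("insert into t_citizen_status (citizen_id, updater, location) "
                  f"values ({citizen_id}, '{name}', '{location}')")

      def build_rows(row_count, execute):
          # execute() stands in for whatever call hands the SQL to the database manager.
          for i in range(row_count):
              execute(build_insert(i, NAMES[i % len(NAMES)], LOCATIONS[i % len(LOCATIONS)]))

      # Example: hand ten rows to a stand-in executor that just prints them.
      build_rows(10, print)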

Scalability :
      The AxleBase limit is 20 exabytes per table.
      The Model Hierarchy Validation hypothesis indicates that
          reaching and operating at that size is nearly certain.

Published Value Changes :
      A re-index was completed in October 2015.
      That changed some values because the new index is a
          different size and a hundred times faster.



End of Table Size test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Very Large SQL Queries



This address:   jragan.com/axletests.htm#10.85.10



Purpose :
      Show feasibility of querying very large relational databases.

Notes follow the Test Tabulation.


Test Tabulation
Very Large SQL Queries
date October 2015
data  
row count 100 Billion
( 100,273,828,500 )
byte count - data 8 terabytes
( 8,021,779,680,000 )
byte count with indices 25 terabytes
( 25,315,676,628,858 )
file count in table 38 thousand
( 38,005 )
data types queried alpha ( people names )
see data appendix line 6
alpha ( Shannon data )
see theory
numeric (sorted, mostly)
see data appendix line 7
queries  
query run durations  
      alpha (people names) 127.06 seconds *
789   million rows per second
      alpha ( Shannon data ) 142.59 seconds *
703   million rows per second
      numeric 25.76 seconds *
3.8   Billion rows per second
rows returned per query one
errors none
problems none
reservations none
performance to specification
mechanics of interest  
data table t_citizen_status
see data table appendix
configuration single node
standard database manager
query computer computer number 5
see computer appendix.
file server count 11
file server computers 2,3,5,6,11,12,13,14,15,16,17
see computer appendix.
data on USB 2.0 cables 14,000,000,000 rows
host app. AxHandle demonstrator
file server disk drives see misc. equipment appendix.
network see misc. equipment appendix.
miscellaneous Yield processor was on
and disks were defragged.
_______________________ _______________________

* Click here to see the raw test data in the appendix.

Operations appeared to the user to be using normal sized tables.

The Index Impact :
      If the indices were removed from this table,
      and assuming that it takes one thousandth of a second to
          locate, read, and analyze each row,
      then a query would run day and night for years.
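
The arithmetic behind that claim, worked out in a small Python sketch under the one-millisecond-per-row assumption stated above:

      # Rough unindexed full-scan estimate for the VLT, assuming 0.001 second
      # to locate, read, and analyze each row (the assumption stated above).
      rows = 100_273_828_500
      seconds = rows * 0.001                      # about 100 million seconds
      years = seconds / (365 * 24 * 3600)
      print(round(seconds), "seconds, or roughly", round(years, 1), "years")   # ~3.2 years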

Queries :
      Industry-standard ANSI-92 SQL queries.
      Each query was a simple select; e.g.
      select * from t_citizen where citizen_id = 89584262310
      select * from t_citizen where name = 'nadine'
      select * from t_citizen where location ='s56565656565656'

Other Query Types :
      Other query types also run well and are not recorded; e.g.
          between, in, greater than, min, max, etc.

File Fragmentation :
      Indices were built after the data to preclude fragmentation.

Test Validity :
      Unique values are in the table at precisely known locations.

Scalability :
      The AxleBase design limit is 20 exabytes per table.
      The Model Hierarchy Validation hypothesis indicates that
          reaching and operating at that size is probable.
      Internal mechanisms treat an index as an open-ended
          object, so size is irrelevant.
      Indexing is designed to conform to any hardware
          and network topology.

Published Value Changes :
      A table re-index was completed in October 2015.
      Some values have changed because the new index is larger
          and a hundred times faster.

( Anomaly :
      I do not know why the VLT join ran faster than the VLT query, and I apologize for that failure and for the cessation of VLT research. They used the same table and the same hardware, so one would expect the complex join to be far slower. My surprise prompted multiple failed attempts (not shown) to reverse the disparity. I end the project with only theories that are unworthy of publication.
            JR )



End of Very Large Query test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Small Table SQL Queries


This address:   jragan.com/axletests.htm#10.85.12

Purpose :
      Test the query speed of small tables.

Notes follow the Test Tabulation.


Test Tabulation
Small Table SQL Queries
test date October 2011
row count 100 million
( 101,266,000 )
byte count 8 billion
( 8,101,280,000 )
queries  
column data type alpha (people names)
see data appendix line 4
      time to return 0.046 second
      rows returned 1
column data type alpha ( "Shannon Data")
      time to return 0.25 seconds
      rows returned 6
column data type numeric
see data appendix line 7
      time to return 0.125 second
      rows returned 1
mechanics of interest  
configuration single node
standard database manager
query computer computer number 3
see computer appendix.
host app. AxHandle demonstrator
data table t_citizen_status
see data table appendix
disk drives see misc. equipment appendix.
  and two remote computers
network see misc. equipment appendix.
miscellaneous yield processor was on
disks were defragged
_______________________ _______________________

Test Result Summation :
      Errors :     None.
      Problems :     None.
      Performance :     To specification.
      Operations appeared to be in normal sized tables.
      Other queries such as between, in, less than, greater than,
          min, max, etc. were run but not recorded.

Test Designs :
      Industry-standard ANSI-92 SQL queries.
      Returns were validated.
      Simple queries to ease comparison and discussion.
      Standard tables with a few indices.
      Examples :
      select * from t_citizen where updater = 'laura'
      select * from t_citizen where location = 's877874857901'
      select * from t_citizen where citizen_id = 22459145625

Indexing :
      Indices were built after completing data insertion to preclude fragmentation.



End of Small Table Query tests.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Tiny Table SQL Queries


This address:   jragan.com/axletests.htm#10.85.13

Purpose :
      Test the query speed of tiny tables.

See the notes following the Small Table Test Tabulation in the preceding section.


Test Tabulation
Tiny Table SQL Queries
test date October 2011
row count 25 million
( 25,316,500 )
byte count 2 billion
( 2,025,320,000 )
queries  
column data type alpha (people names)
see data appendix line 4
      time to return 0.046 second
      rows returned 1
column data type alpha ( "Shannon Data")
      time to return 0.171 second
      rows returned 1
column data type numeric
see data appendix line 7
      time to return 0.109 second
      rows returned 1
mechanics of interest  
configuration single node
standard database manager
query computer computer number 3
see computer appendix.
host app. AxHandle demonstrator
data table t_citizen_status
see data table appendix
disk drives see misc. equipment appendix.
  and two remote computers
network see misc. equipment appendix.
miscellaneous yield processor was on
disks were defragged
_______________________ _______________________



End of Tiny Table Query tests.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Very Large Table Joins


This address:   jragan.com/axletests.htm#10.85.15

Purpose :
      1. Test the ability to join very large objects.
      2. Ascertain approximate large object join speeds.
      3. Ensure transparency of the operation.

Notes follow the Test Tabulation.



Test Tabulation
Very Large SQL Table Joins
test date October 2015
results  
query duration 35.42 seconds
rows returned 1
errors none
problems none
performance to specification
tables joined  
quantity 4
table name t_citizen_status
see test table appendix
. . . . row count 100 Billion
( 100,273,828,500 )
. . . . byte count - data 8 terabytes
( 8,021,779,680,000 )
. . . . byte count with indices 25 terabytes
( 25,315,676,628,858 )
. . . . file count in table 38 thousand
( 38,005 )
table name t_duplicate
see test table appendix
. . . . row count 100 million
( 101,266,000 )
. . . . table byte count 26 gigabytes
( 26,518,369,781 )
table name t_config
see test table appendix
. . . . row count 100 million
( 101,266,047 )
. . . . table byte count 11 gigabytes
( 11,647,920,329 )
table name t_status_code
see test table appendix
. . . . row count 11
. . . . table byte count 2,859
mechanics of interest  
join data types alpha (people names)
see data appendix line 4
and data appendix line 6
configuration single node
standard database manager
query computer computer number 5
see computer appendix.
file server count 11
file server computers 2,3,5,6,11,12,13,14,15,16,17
see computer appendix.
data on USB 2.0 cables 14,000,000,000 rows
file server disk drives see misc. equipment appendix.
network see misc. equipment appendix.
host app. AxHandle demonstrator
miscellaneous yield processor was on
disks were defragged
_______________________ _______________________

Query Design :
      Industry-standard ANSI-92 SQL was used.
      Result shown was from following query :
          select a.updater, b.owner, a.status, c.location,
            d.description from t_citizen_status a
          left join t_config b on a.updater = b.owner
          left join t_duplicate c on b.owner = c.updater
          right join t_status_code d on a.status = d.status
          where a.updater = 'monica'
      All four tables were joined into a single dataset.

Operations appeared to the user to be using normal sized tables.

Validity :
      Unique values are in the tables at precisely known segment and row number locations.

Scalability :
      The AxleBase limit is 20 exabytes per table.
      The Model Hierarchy Validation hypothesis now indicates
          that reaching and operating at that size is almost certain.
      Internal mechanisms treat an index as an open-ended object
          so that size is irrelevant to it.
      Indices are designed to conform to any system topology.

Published Value Changes :
      A re-index was completed in October 2015.
      That changed some values because the new index is larger
          and a hundred times faster.

( Anomaly :
      I do not know why the VLT join ran faster than the VLT query, and I apologize for that failure and for the cessation of VLT research. They used the same table and the same hardware, so one would expect the complex join to be far slower. My surprise prompted multiple failed attempts (not shown) to reverse the disparity. I end the project with only theories that are unworthy of publication.
            JR )



End of Large Table Join test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   SQL Updates



This address:   jragan.com/axletests.htm#10.85.20

Purpose :
      Test ability to update large entities.
      Test update of distributed large objects.
      Ascertain approximate update speeds.

Additional comments follow the Test Tabulation.



Test Tabulation
SQL Updates
test date 27 Nov 2011
tiny table  
. . . . row count 25 million     ( 25,316,500 )
. . . . byte count 2 billion     ( 2,025,320,000 )
updates  
type alpha
see data appendix line 4
. . . . . . . . duration 0.156 second
. . . . . . . . rows updated 1
type numeric
see data appendix line 7
. . . . . . . . duration 0.156 second
. . . . . . . . rows updated 1
type alpha
( "Shannon Data")
. . . . . . . . duration 0.125 second
. . . . . . . . rows updated 6
small table  
. . . . row count 100 million     ( 101,266,000 )
. . . . byte count 8 billion     ( 8,101,280,000 )
updates  
type alpha
see data appendix line 4
. . . . . . . . duration 0.109 second
. . . . . . . . rows updated 1
type numeric
see data appendix line 7
. . . . . . . . duration 0.187 second
. . . . . . . . rows updated 1
type alpha
( "Shannon Data")
. . . . . . . . duration 0.703 second
. . . . . . . . rows updated 9
very large table * See VLT deferment below.
mechanics of interest  
configuration single node
standard database manager
query computer computer number 6
see computer appendix.
disk drives see misc. equipment appendix.
network see misc. equipment appendix.
host app. AxHandle demonstrator
data table t_citizen_status
see data table appendix
miscellaneous yield processor was on
disks were defragged
_______________________ _______________________

Query Design :
      Industry-standard ANSI-92 SQL.
      update t_citizen_status set flag = true where updater = 'lauren'

Result :
      No errors.
      Performance to specification.
      Times are shown in the following table.
      Operations appeared to be in a normal-sized table.

Validity :
      Unique values are in the table at known locations.
      Row contents were verified before and after query.

* VLT Deferment :
      Updating a test table disturbs its known characteristics, so a backup must be made before the test and restored afterwards to maintain those characteristics. The VLT (very large table) backup requires more hardware than is currently on hand.
      Therefore the VLT test must be deferred.



End of Large Update test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Virtualized Tables


This address:   jragan.com/axletests.htm#10.85.25

Click here for a description of virtualization in the AxleBase description document. For reasons given there, the tests have been removed from this document. Virtualization will be one of the features of the AxleBase technology that is not covered.

This ten- or fifteen-year-old note was found while editing documents in 2022: "A virtual table of trillions of rows was joined to billion-row tables." (The builder remembers little about those tests except that they beat the daylights out of the AxleBase-lab hardware.)



End of table virtualization.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Mixed Storage Media


This address:   jragan.com/axletests.htm#10.85.30

Purpose :
      Verify AxleBase storage media eclecticism.
      AxleBase was designed for a distributed operation across generalized and disparate hardware of uncontrolled quality and quantity. The question addressed is whether or not that specification has been maintained through development.

Notes follow the Test Tabulation.



Test Tabulation
Storage Media Containing The Table
Test Date 14 Jan 2012
CD ROM, 52x 650 meg
solid state drive USB, 2.0 4 gig
floppy drive from the last century 1 meg
external hard disk ESATA 2 tb
hard disk PATA 32 gig

Query Access Sequence
medium
PATA
CD
PATA
ESATA
floppy disk
PATA
solid state flash
CD
PATA
floppy disk
PATA
solid state flash
PATA
ESATA
____________________

Caveat :
      This test attempted to stress AxleBase, and offers no recommendations.

Test Result Summation :
      No errors.
      Performance to specification.
      Operations appeared to be in a normal table.
      If the operating system can recognize a pencil and paper as a storage device, then AxleBase can use it.

Media Selection :
      Every medium that was on hand in the lab was used.

Test Structure :
      A table of 14,250 rows was created.
      It was spread across all of the storage media.
      Rows were mixed so each medium was accessed repeatedly.
      The entire table was returned by every query.
      select * from t_citizen_status
      ( The read-only CD precluded a write test. )

Times :
      Timing the test was considered pointless due to the low speed of some media such as floppy disks from the last century.



End of Mixed Media test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Concurrency


This address:   jragan.com/axletests.htm#10.85.35

As A Database Server

AxleBase is a database manager, and he can be inserted into a server, which creates a database server. In that configuration, concurrency performance is primarily controlled by the server and not the database manager.

A development and demonstration server has been built in the AxleBase lab, which functions well for those two objectives, but it is not intended to be an enterprise-level server. Therefore, server concurrency has not been tested.

As A Multi-Source Installation

The AxleBase design allows multiple source operations. A mix of multiple servers and single instances can share databases even when those sources are uncommunicative and geographically dispersed. That kind of operation presents unusual challenges that are totally alien to standard database managers.

Successful tests of that kind of operation have been executed.

However, that operation, with its abstract challenges, is so unusual that it is hard to understand, and its tests become meaningless and misguiding for most people. At the very best, the natural tendency is to equate test results with the performance of standard database servers. Therefore, those test results have been removed and discarded.



End of Concurrency test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






_________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Index Build Speed



This address:   jragan.com/axletests.htm#10.85.40

Objective :
      Determine time required to index an existing dataset.

Notes follow the Test Tabulation.



Test Tabulation
Index Build Speed
test date April 2015
Data  
rows 12.5 billion rows
byte count 100 billion bytes
data types numeric, shannon, alpha
column widths 20, 16, 20
Test Results  
rows per second approx. 7,575
rows per day 654 million
errors none
problems none
performance to specification
Mechanics Of Interest  
host app. AxHandle demonstrator
computers various
data table t_citizen_status
see data table appendix
network see misc. equipment appendix.
miscellaneous yield processor was on
_______________________ _______________________

Marginalia :
      Indexing is so complex that we could easily become mired down in the complexities. Therefore, this test is designed to be as simple as possible while giving a "ballpark" performance feel.
      AxleBase can also index "on the fly" as data is entered.
      Indexing on the fly seems faster to the user because each row is indexed so quickly.
      But most large tables are expected to be indexed after they are created.
      AxleBase can also create multi-column indices, but single columns will be easier for you to evaluate.
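
As a rough arithmetic check on the tabulation above (a sketch only; the per-second and per-day rates are the published approximations, and the elapsed-days figure is inferred from them rather than published):

      # Rough check of the published index-build rates.
      rows_per_second = 7_575
      rows_per_day = rows_per_second * 86_400        # ~654 million, matching the tabulation
      rows_to_index = 12_500_000_000
      days_to_index = rows_to_index / rows_per_day   # ~19 days for the 12.5 billion test rows
      print(rows_per_day, round(days_to_index, 1))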



End of Index Build Speed test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   SQL Inserts


This address:   jragan.com/axletests.htm#10.85.45

Using one computer avoided test complications of concurrency.

Large tables are usually indexed or re-indexed after data acquisition.

* Time includes the GUI front-end operation, which builds each insert row, hands it to AxleBase, and displays the operation.



Test Tabulation
SQL Insert Speed
test date February 2015
Test Results  
speed ~ 0.00075 second per row *
rows inserted 350 million
Mechanics Of Interest  
host app. AxHandle demonstrator
query computer computer number 14
see computer appendix.
database AxHandle demo
data table t_citizen_status
see data table appendix
indexed no
miscellaneous yield processor was on
_______________________ _______________________



End of Insert test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Very Large Row Stress


This address:   jragan.com/axletests.htm#10.85.70

Purpose :
      To demonstrate AxleBase ability to handle large data rows.

Notes follow the Test Tabulation.



Test Tabulation
Very Large Row Stress
test date Nov 2012
large row  
data  
. . . . . . . row size two megabytes
( 2,000,053 bytes )
. . . . . . . row count 3,005
. . . . . . . byte count 6,010,159,265
insert time average 0.049 second
select time 1.049 sec.
includes load from disk
very large row  
data  
. . . . . . . row size ten megabytes
( 10,000,053 bytes )
. . . . . . . row count 500
. . . . . . . byte count 5,000,026,500
insert time average 0.59 second
select time 3 min. 7 sec.
includes load from disk
mechanics of interest  
host app. AxHandle demonstrator
query computer computer number 6
see computer appendix.
database demo_main
data table t_large_row
see data table appendix
indexed yes
miscellaneous yield processor was on
_______________________ _______________________

Test Result Summation :
      No errors and no problems.
      AxleBase performed to specification.
      But the little machine had to be rebooted. ': )

Limit :
      AxleBase is designed for rows up to 2 gigabytes where the infrastructure can handle it.

Speed :
      The small index size yielded ridiculously high speed.
      But hardware handling very large rows slowed delivery.



End of Very Large Row test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   File Manager Stress


This address:   jragan.com/axletests.htm#10.85.75

Purpose :
      Stress the sub-systems that manage data files.
      Can AxleBase automatically, and as needed :
          1. Create millions of data files for a table ?
          2. Manage those millions of data files ?
          3. Store data in those files ?
          4. Query the table with standard ANSI-92 SQL ?
          5. Do all of that reliably and accurately ?
          6. Do it transparently so it appears simple to the user ?

Notes follow the Test Tabulation.


Test Tabulation
File Manager Stress
date June 2016
data  
actual data files in table 10 million
10,009,096
unexpressed table size(**) 20 petabytes
20 quadrillion bytes
20,018,192,000,000,000 bytes
200 trillion rows
tests  
diagnostic commands  
  ScanFiles
  ShowHealth
  ShowTopology
  ShowLocations
SQL  
  select count
  select from where
  delete where
  update where
  purge
  insert into
mechanics of interest  
configuration single node
standard database manager
computer computer number 17
see computer appendix.
network see misc. equipment appendix.
database demo_main
data table t_citizen_status
see data table appendix
rows per data file 100
host app. AxHandle demonstrator
miscellaneous yield processor was on
disks were not defragged
_____________________ ______________________

** Unexpressed size is the size the table would reach if the data files were allowed to fill.

Problem Constraints :
      1. Resources : The little AxleBase lab cannot afford enough equipment to house even one petabyte-size table.
      2. Time : Inserting two hundred trillion rows at one thousandth of a second per row in the AxleBase lab would require six thousand years.

Solution :
      The standard data file size is two gigabytes.
      Support of that size has been tested for years.
      So this database was configured for very tiny data files.
      AxleBase stops filling each file when it reaches the
            configuration limit, and starts filling the next file.
      That allows a test of very large table management
            by creating vast numbers of data files.
      Insert data using industry-standard ANSI-92 SQL.

Situation :
      The build was stopped when the computer started laboring.
      The table now contains 10,009,096 data files.
      If those data files were full:
            Table size of twenty petabytes.
            200 trillion rows.
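
A small Python sketch of the arithmetic behind the constraint and the unexpressed size, using the figures in the tabulation above; the 100-byte row width is an assumption implied by those figures, not a published value.

      # Arithmetic behind the "unexpressed" table size and the time constraint.
      files = 10_009_096                  # data files actually created
      file_limit_bytes = 2_000_000_000    # standard data-file size of two gigabytes
      row_bytes = 100                     # assumed row width implied by the tabulation

      unexpressed_bytes = files * file_limit_bytes         # 20,018,192,000,000,000 (20 petabytes)
      unexpressed_rows = unexpressed_bytes // row_bytes     # ~200 trillion rows

      # Filling those rows at one thousandth of a second per row:
      years_to_fill = unexpressed_rows * 0.001 / (365 * 24 * 3600)
      print(unexpressed_bytes, unexpressed_rows, round(years_to_fill))   # ~6,300 years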

Test Result Summation :
      AxleBase performed to specification.
      Table diagnostics show optimal conditions.
      SQL mechanisms operate correctly.
      AxleBase file manager sub-system demonstrated
      twenty-petabyte capability.
      The operating system evinced great stress that might be
          caused by so many files on a single disk.

Conclusion :
      Since all AxleBase mechanisms performed to spec with no sign of stress,
      it is believed that AxleBase continues to meet his design limit of
      20 exabytes per table.

For actual queries of tables containing billions of rows, see the Very Large Query test.



End of File Manager test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   Super-System Performance


This address:   jragan.com/axletests.htm#10.85.80

Purpose Of Test :
      For extraordinary power, AxleBase can be configured as an axsys super-system driving many computers to deliver super-computer power for queries and utilities.
      When configured as an axsys, AxleBase continues to function as a standard relational database manager, so only the database administrator knows the difference.

Notes follow the Test Tabulation.


Test Tabulation
Axsys Super-System Performance
test date November 2015
data description  
table t_citizen_status
see data table appendix
row count 100 Billion
( 100,273,828,500 )
byte count - data 8 terabytes
( 8,021,779,680,000 )
byte count with indices 25 terabytes
( 25,315,676,628,858 )
file count in table 38 thousand
( 38,005 )
data types queried alpha ( people names )
see data appendix line 6
alpha
see Shannon data theory
numeric (sorted, mostly)
see data appendix line 7
query results  
query run durations  
      alpha (people names)     6.96 seconds *
      alpha ( Shannon data )   16.12 seconds *
      numeric     2.46 seconds *
raw data click here for raw test data
errors none
problems none
reservations none
performance to specification
mechanics of interest  
control node computer computer number 10
see computer appendix.
control node app AxHandle demonstrator
query node count 11
query node computers 2,3,5,6,11,12,13,14,15,16,17
see computer appendix.
data on USB 2.0 cables 14,000,000,000 rows
query node apps
(database servers)
AxServer demonstrators

* Click here to see the raw test data in the appendix.

System Imbalance :
      The system runs at the speed of the slowest node.
      The primary objective was to build a hundred-billion-row table while minimizing hardware loss from the 24x7 pounding, since funds were limited. Therefore, data movement after the build was resisted. The resulting load imbalance slowed the test system.
      The test used 11 computers, 5 of which were overloaded with more than twice as much data as the other computers contained. The slower processing time of those 5 overloaded computers set the time for the entire system. The data load is shown in the imbalance table.

AxleBase allows combining query nodes with file servers if desired.

Scalability Of Super-System :
      See the following Scaling The Super-System section and the Limits page.

Test Design :
      Industry-standard ANSI-92 SQL queries.
      Returns were validated.
      Simple queries were used to ease comparison and discussion.
      Example :
      select * from t_citizen where location = 's877874857901'

Estimating AxleBase Super-System Speed :
      In a tuned system, performance scales in a nearly linear,
          direct correlation with the number of axsys nodes.
      Performance estimation where the base is performance
          of the standard database manager configuration :
      time = (base time) / (number of axsys nodes)
      speed = (base speed) * (number of axsys nodes)
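
As a worked illustration of those formulas, a small Python sketch follows. It uses the standard-configuration numeric VLT query (25.76 seconds over 100,273,828,500 rows) as the base; the measured super-system figures in the tabulations above differ somewhat because of the load imbalance described there.

      # Sketch of the axsys performance estimate, using the standard-configuration
      # numeric VLT query as the base.
      base_time = 25.76                   # seconds, standard database manager configuration
      base_rows = 100_273_828_500

      for nodes in (11, 110):
          est_time = base_time / nodes                   # time = (base time) / (number of axsys nodes)
          est_speed = (base_rows / base_time) * nodes    # speed = (base speed) * (number of axsys nodes)
          print(nodes, "nodes:", round(est_time, 2), "seconds,",
                round(est_speed / 1e9, 1), "billion rows per second")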

Ease Of Use :
      Super-system operation in a production environment is not for the neophyte or faint of heart. An experienced master DBA (database administrator) with appropriate assistance is recommended.



End of Super-System test.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Scaling The Super-System


This address:   jragan.com/axletests.htm#10.85.83

This is not a test.
This is a scaling demonstration.

Purpose :
Demonstrate the super-system's scaling ability based upon :
      The preceding test results.
      Known characteristics of the system.
      System configuration features.
      The system's Model Hierarchy Validation.

AxleBase is configured as a super-system by loading it onto multiple computers, and configuring each AxleBase instance to become a node.
      ( Every AxleBase instance looks for an axsys configuration
          for itself when it opens any database.
      If it finds one for itself, it configures itself as directed.
      A query node executes queries and instructions.
      A file server passively stores data.
      A control node sends queries and instructions to the
          query nodes and harvests their responses.)
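
A purely conceptual Python sketch of that division of labor follows. It is not AxleBase code; the in-memory shard scan merely stands in for whatever a query node actually does against its local files.

      # Conceptual illustration of the axsys pattern: a control node sends the same
      # query to every query node, each node works only its own data, and the
      # control node merely harvests and combines the responses.
      from concurrent.futures import ThreadPoolExecutor

      def query_node(shard, predicate):
          # Each query node scans only the rows it holds.
          return [row for row in shard if predicate(row)]

      def control_node(shards, predicate):
          # The control node fans the query out and combines the answers.
          with ThreadPoolExecutor(max_workers=len(shards)) as pool:
              partials = list(pool.map(lambda s: query_node(s, predicate), shards))
          return [row for part in partials for row in part]

      # Example with three tiny shards standing in for query nodes.
      shards = [[("anna", 1), ("ruth", 2)], [("uma", 3)], [("monica", 4), ("anna", 5)]]
      print(control_node(shards, lambda row: row[0] == "anna"))   # [('anna', 1), ('anna', 5)]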

The test in the preceding section was run with eleven computers. AxleBase configured each computer as a node in the axsys, and each also served as a file server.
      The VLT (very large table) is so big that every node was over-loaded in the previous test.
      Most nodes were assigned at least six billion rows.

The axsys in the test could be expanded by assigning more nodes to the test. The table is so big that a hundred nodes could easily be used.

Expansion Impact :
      If we continue to use each computer as the file server for its node, then there will be no increased disk contention and no network contention.
      The increased load on the controller node will be negligible since it merely harvests and combines the query nodes' work.
      Each query node's work will be reduced in direct proportion to the increased number of nodes.

Therefore, increasing the number of nodes to ten times as many will increase the speed of the previous test by ten times.

Based upon the following tabulation, AxleBase could query a trillion rows of numeric data in about two and a half seconds using 110 old desktop computers,
or in under a second with more nodes.

* Please remember that these speeds are based on using turn-of-the-century single-processor desktop computers.


Test Tabulation
Axsys Super-System Scaling Projection
Projecting From The Previous Test
data description  
table same
row count 100 Billion
( 100,273,828,500 )
byte count - data 8 terabytes
( 8,021,779,680,000 )
byte count with indices 25 terabytes
( 25,315,676,628,858 )
file count in table 38 thousand
( 38,005 )
data types queried same
query node counts  
projected count 110
previous test count 11
projected query run durations  
      alpha (people names)   0.696 seconds
  144 billion rows per second
      alpha ( Shannon data )   1.612 seconds
   62 billion rows per second
      numeric   0.246 seconds
  408 billion rows per second
previous query run durations
      alpha (people names)   6.96 seconds
  14.4 billion rows per second
      alpha ( Shannon data )   16.12 seconds
  6.2 billion rows per second
      numeric   2.46 seconds
  40.8 billion rows per second

Scalability Limit :
      See the Limits page.
      A point of diminishing returns may be reached as the axsys speed approaches or exceeds infrastructure limits such as the speed of the controller node.

Administration :
      Table size hardly impacts administration except for increased hardware maintenance.
      The same is true of axsys size.
      However, if the axsys is enlarged after the table is built, as in this demonstration, unusual work is required of the DBA. In this hypothetical exercise, the DBA must plan the operation, make backups, carefully divide and relocate large datasets, update the database system files, and test everything, all while meeting daily user needs. All of that will take some time.



End of Scaling The Super-System.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Test:   BLOB Management


This address:   jragan.com/axletests.htm#10.85.85

BLOB is an acronym for binary large object. A blob may be a photograph, piece of music, ultrasound recording, geology probe, movie, etc., etc. BLOBs are not data, but are objects that we want a database manager to manage.

BLOB management is simple for AxleBase, but it gets an inordinate amount of attention from some people, which is why this little report is presented.


Test Tabulation
BLOB Handling
date daily
from 2005 to present *
photographs  
quantity 1,138
bytes in blobs ** 123 meg (123,306,752)
music  
quantity 2,760
bytes in blobs ** 13 gig (13,730,381,824)
speed  
query time see following note ***
mechanics of interest  
host app personal system
usage many times per day


* This AxleBase BLOB app. has been in use since approximately 2005.
      The GUI app asks AxleBase for BLOB tables which it displays. The user selects one or more photographs, music, etc., and AxleBase returns them to the app for display, play, etc.

** Photos and music are all compressed.

*** Some people have no idea of the nature of data or of how a database manager works, so this query-speed item was entered for them.
      For a database manager like AxleBase that can query rows at a rate of 40 billion or more per second, finding a BLOB in one of these tiny tables is instantaneous.
      Then, displaying a photo or any other manipulation done by any other software is done at the speed of that software and the hardware, with which AxleBase has nothing to do.



End of BLOB handling.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Section
Distribution Performance


This address:   jragan.com/axletests.htm#10.85.90

The performance characteristics of various distributed topologies and configurations have not been formally tested.

      However, the system was run for over a decade in various distributed configurations as needed for other testing, and no configuration was avoided.
      The number of computers used varied from one to fifteen, limited only by available funds.
      Databases and tables varied in size and in topological distribution.
      The operating systems varied according to what was available when a test was run, some of which are shown in the Test Computer appendix.
      The Misc. Equipment appendix shows the latest network.

Findings:
      It was found that configurations were nearly irrelevant for performance and reliability when running stress and performance tests.
      No configuration of topology, hardware, and operating system ever caused a fault that was detected in the AxleBase system.



End of Distribution Performance.
Click to return to table of contents at the top
Or press Alt left arrow to return to text.




__________________________________________________
__________________________________________________
AxleBase Tests
Appendices

This address:   jragan.com/axletests.htm#10.90.00

__________________________________________________
__________________________________________________



Click to return to table of contents at the top
Or press Alt left arrow to return to text.




__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Empirical Verification Of Tests And Claims



Skepticism is welcome and encouraged.
( Even the builder sometimes has a hard time believing the tests on this page and has re-run them many times to convince himself. )

Performance Measurement:
          A timer with a GUI display is built into the AxHandle app, so operations are routinely timed.
          Reported times are averages of multiple runs.

Data Object Size And Configuration:
          AxleBase has management report utilities such as the ShowTopology command that can be run at any time.

Courtesy Demos:
          Courtesy test demonstrations are available for interested businesses, governments, and academia. Facilities limit each attending group to six people.

Longevity Problem:
          The VLT (very large table) is housed on many desktop computers; it is too big to be copied, and there is no power protection. Since its longevity is problematic and it took years to build, if you want to see a demonstration using it, do it soon.
          Hardware Failure. One of the computers died in December 2016. The data on it may have survived, but the small databases are offline and seven billion rows of the VLT are offline. Demonstrations using those databases and the VLT can be done only if an XP or Win2k computer can be found, configured, and brought online in the system.

Offsite Validation:
          If an organization presents a serious interest in acquisition, has the resources and skilled personnel needed to operate an AxleBase installation, and intellectual property can be protected, consideration may be given to allowing it to run evaluations on its equipment for a limited period. Mentoring and technical support will be provided.

Scientists:
          Computer scientists will be given special consideration for on-site demonstrations, but protection of intellectual property precludes allowing them to have copies of AxleBase or of its databases.



End of Empirical Verification appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.




__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Performance Comparisons: Dremel


This address:   jragan.com/axletests.htm#10.90.10

The complexity of test construction and result analysis is one of the many reasons that Microsoft, Oracle, and others have wisely disallowed open comparisons. Tests are too hard to control and understand to be effectively compared by the average person.

However, the AxleBase developer unwisely publishes his tests. Since billion-dollar marketing budgets and thousand-man open-source projects will try to discount his work, he does so in hopes that others will judge mercifully in his favor.


Dremel
Google's Massive Web Scale Database Manager
versus
Little AxleBase
(Weighing in at less than a megabyte.)

Google presented its Dremel database server at the 36th International Conference On Very Large Databases, and published its presentation in a brief white paper. The following steps roughly compare the Dremel test to the AxleBase test.

Query 3 in their paper was used because it lent itself to comparison with AxleBase and with the AxleBase test.

The AxleBase test used in this comparison is shown in the Super-System table. The string data query was compared because it matched the Dremel data.

1. Determine Hardware Differential :
      This is a software test, so equalize the hardware.
      Dremel used 2,900 computers.
      AxleBase used 11 computers.
      Divide their 2,900 by the AxleBase 11 computers.
      2,900 / 11 = 264

2. Determine Data Differential :
      The amount of data is almost irrelevant to AxleBase.
          See the limits page.
      But the data structure is critical.
      The AxleBase table had 100 billion rows.
      The Dremel table had only 24 billion rows.
      The AxleBase table was 4.167 times bigger.
      100,000,000,000 / 24,000,000,000 = 4.167

3. Original Run Time :
      The original AxleBase run time was 6.96 seconds.

4. Apply Hardware Differential:
      Divide that time by the hardware differential to get the
          AxleBase run time with 2,900 computers.
      6.96 / 264 = 0.0264 of one second run time.

5. Apply Data Differential :
      Equalize the data load by dividing the above run time by
          the data differential.
      0.0264 / 4.167 = 0.0063 of one second run time.

6. Compare The Normalized Performances :
      Dremel : 27 seconds
      AxleBase : Six thousandths of one second (0.0063).
      Result : 27 / 0.0063 = 4,285.7 times faster.


Therefore,
AxleBase was four thousand times faster than Dremel.
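
The same normalization, expressed as a small Python calculation; the inputs are only the published figures above, and the intermediate rounding follows the steps as written.

      # The rough normalization described in steps 1 through 6 above.
      dremel_computers, axlebase_computers = 2_900, 11
      dremel_rows, axlebase_rows = 24_000_000_000, 100_000_000_000
      axlebase_time = 6.96                # seconds, string-data query, super-system test
      dremel_time = 27.0                  # seconds, Dremel query 3

      hardware_factor = round(dremel_computers / axlebase_computers)   # 264      (step 1)
      data_factor = round(axlebase_rows / dremel_rows, 3)              # 4.167    (step 2)
      step4 = round(axlebase_time / hardware_factor, 4)                # 0.0264   (step 4)
      step5 = round(step4 / data_factor, 4)                            # 0.0063   (step 5)
      print(round(dremel_time / step5, 1))                             # 4285.7   (step 6)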


Percentage :
      Note the .7 on the result. It was retained as a reminder that a system that is 70% faster would be remarkable, while the result here is nearly a half million percent faster.

Standard Configuration :
      AxleBase can also run on one computer like Oracle and MS Sql Server. The single-computer test used the same data that tested the parallel processor system.

One computer was almost as fast as Dremel's
2,900 parallel processing computers.
The AxleBase technology gave mainframe speed to desktop computers, and that is the point that everybody missed: intelligent, conscientious, plodding software engineering is more powerful than expensive hardware and fancy parallel processing.

Hardware :
      Standard single-processor turn-of-the-century desktop computers that were scrapped or bought used. Windows 2000 was used. Disks were the cheapest locally available disk drives.

Reminders :
      Performance was achieved by a high-end relational database manager that continued all tasks on the description page such as assessing the health of the 38,000 files in the table during every query.
      AxleBase is so fast that he captures the hardware. Therefore, he has a YieldProcessor toggle that can force him to pause tens of thousands of times per second to allow other systems to run on the machine, and he must check its value each time that he passes it. YieldProcessor is yes in all tests so the operating system's GUI can operate. How fast would he be if that toggle were entirely removed?

Security And Reliability Under Load :
      See the description page.

Unstructured Data :
      Beware. Unstructured data is sometimes an excuse for failure to employ professionals for design of installation and database.

Daydreaming :
      Since the absolute value of AxleBase's test time was 6.96 seconds with only 11 computers, imagine his performance with, not Dremel's thousands, but only a hundred desktop computers.



End of Performance Comparisons appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Scaling and
Model Hierarchy Validation



Concept Source:
          Model hierarchy validation was presented by Kevin Heng (Maximilian University, Germany) as a model validation methodology applicable across all science disciplines where full-size modeling is impossible, such as in his field of astrophysics.
( Source: American Scientist, May-June 2014, pp. 174-177. )
A requirement is that the model be unchanged and unadjusted across all magnitudes while delivering consistent result quality.

Hierarchy:
          Years of AxleBase testing have proceeded through ever larger data entities as more equipment was brought online, thus creating a controlled orthogonal, or linear, model hierarchy. Tests have grown through many orders of magnitude from tables with hundreds of rows to a hundred billion rows.

Result:
          Those vast magnitude increases with an unchanged model across a decade of tests have yielded a linear progression of test result magnitudes with unchanged qualitative behaviors. Results have always met specifications and have done so without error.

Conclusion:
          Model hierarchy validation indicates that AxleBase can scale to its design limits of exabytes in trillions of rows.
          ( Also, see the File Manager Stress Test projection.)



End of Model Hierarchy Validation appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Methods and Objectives



Test Objectives :
      1. Show that AxleBase works.
      2. Demonstrate the vast data stores that he can manage.
      3. Test the speed in managing those great tables.
      4. Demonstrate some of his advanced technologies.
      5. Show adherence to the relational database construct.

Scientifically rigorous in-depth testing is not possible. Although desirable, those million-dollar tests are a bit beyond the resources of a poor man.

Presentation of Operation Domain :
      The great range of AxleBase abilities is sampled to present what is thought to be of greatest general public interest. Many of his abilities are not even mentioned on the web site.

Computer Buffers :
      The modern computer has data buffers that uncontrollably impact test results. Even the disk drives have data buffers. Buffers are circumvented where possible, and reboots are sometimes used to verify results with clear buffers.

Tuning For Tests :
      AxleBase has tuning options, but they were not used.
      Although it slows tests, YieldProcessor is left on so
          the user GUI can control test operations.
      Disks are defragged.

Queries :
      Industry-standard ANSI-92 SQL queries are used.
      To enable comparison and evaluation, reported queries are
          usually kept as simple as possible.
      A few complex queries are included merely to show off.
      All query types are done, but few are reported.

Data :
      Test data is generated by an automated system.
      See the data appendix.

Computers :
      See the computer appendix.
      Testing fancy hardware is not the objective, so
          AxleBase is run on commodity desktop computers.
      The cheapest used computers are bought when possible.

Test Dates And Schedules :
      Tests on this page are spread over years.
      Some test updates are long overdue due to limited resources.
      Obviously, there is no formal schedule.



End of Methodology appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Data Table Metadata



Reference Numbers :
      Numbers in the left column are the reference numbers
          in the preceding test tabulations.

Data Generation :
      Unless a test needs random, unique, datetime, or specific
          values, most columns are filled by a set of literal values.
          Not only does this produce a known dataset, it
          also allows old equipment to generate large datasets in
          less than a year.
      See the Data appendix for more information.

Row Width :
      Row width is of little importance to a database manager.
      Therefore, these tests use moderate row sizes to allow
          construction of large tables for stress and speed tests.
      ( But if you doubt that, then see the Large Row test. )

Column Type :
      An AxleBase table can be configured to use either fixed or variable width.
      These tables are fixed width.
      But it matters little because the tests used indexed data.

Data Domain :
      The data domain concept was developed in the AxleBase lab to control data variables more tightly than can be done by the simpler datatype construct. It subsumes the datatype construct.

Data Table Tabulation
#   table name
    data type   width   data domain
1   t_citizen_status
alpha 1 3 see data appendix
integer 20 6 see data appendix
alpha 16 7 see Shannon data theory
alpha 2 3 see data appendix
alpha 2 3 see data appendix
boolean 1 2 see data appendix
datetime 16 1 see data appendix
alpha 20 4 see data appendix
2 t_config
alpha 1 3 see data appendix
integer 20 6 see data appendix
alpha 10 4 see data appendix
datetime 16 1 see data appendix
alpha 50 3 see data appendix
alpha 50 3 see data appendix
3 t_large_row
alpha 1 3 see data appendix
integer 20 6 see data appendix
datetime 21 1 see data appendix
alpha 10 3 see data appendix
alpha 1,000,000 3 see data appendix
alpha 1,000,053 3 see data appendix
4 t_log
alpha 1 3 see data appendix
datetimex 21 1 see data appendix
alpha 1 3 see data appendix
alpha 10 3 see data appendix
alpha 10 4 see data appendix
alpha 100 3 see data appendix
5   t_status_code
alpha 1 3 see data appendix
alpha 2 3 see data appendix
alpha 10 3 see data appendix
alpha 100 3 see data appendix
datetime 16 1 see data appendix
alpha 20 3 see data appendix
___________ ______ ________________



End of Data Table appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Column-Datatype Descriptions



Data source :
      All test data is generated by the AxHandle test app.
      AxHandle constructs each data row and hands it to AxleBase in a standard SQL insert statement.
      AxHandle tells AxleBase to generate current dates and times where needed.

Reference Numbers :
      Numbers in the left column of the following table are the reference numbers in the tables of test.

String Type :
      The AxleBase string type accepts any ASCII character that does not conflict with a system control code. However, for these tests, only printable characters are used.

BLOBS :
      Trivialities, such as storage and retrieval of large photographs and other BLOBs, are not in these tests.

Null Values :
      The repeating sets contain some nulls.

Row Targets :
      AxleBase can return a row or a row range by number.

The Shannon Limit :
      See Data theory.

Column Data Type Tabulation
# data type AxleBase type domain and morphology
1 date datetime The current date and time are entered as the row is created. May include null values.
2 boolean boolean A recurrent set of 50 literal values provides a controlled domain. May include null values.
3 alpha string A recurrent set of 50 literal values provides a controlled domain. May include null values.
4 alpha string Same as type 3 morphology.
    Additionally, a few specific unique values are inserted at known row locations during the table build for SQL tests.
5 alpha string Randomly generated characters.
6 alpha string Same as morphology 5.     Also inserted were unique values known to be outside the normal population curve for the data. Those were sought to show fast times because most people understand none of this.
7 long integer serial A unique, serially assigned, long integer. Due to loss of administrative control of the large builds, the uniqueness and the sort were compromised numerous times yielding a mostly sorted almost unique set. (Apologies. My administrative abilities are terrible.)
8 alpha string See The Shannon Test .



End of Column Datatype appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Test Computers



Why are all the computers so old and slow ?
      To prove the power of AxleBase.
      Testing hardware speed is of no interest.
      For a smile, imagine any big-name database manager
          running these tests on these computers.

Computer Count :
      Tests are each run by a single computer.
      Other computers are used as file servers.
      The super-system tests, which use multiple computers, are
          in a separate and prominently labeled section.

Reference Numbers :
      Numbers in the left column are the reference numbers used in the test tabulations.

Missing data is caused by :
      Computer failure years before this table compilation.
      Or the computer was bought used.
      The Linux file server records are all lost.

Test Computer Tabulation
 #   mfg.        mfg. year   op. sys.   cpu qty   cpu ghz   ram gig
 2   IBM         ?           Win2k      1         3         1
 3   eMachine    2007        Win2k      1         3.1       2
 4   Dell        1999        Win98      1         .4        .128
 5   home made   2007        Win2k      1         1.8       2
 6   eMachine    2007        Win2k      1         3.5       1
 9   Compaq      1998        Win2k      1         .796      .130
10   eMachine    2008        Vista      1         2.2       1
11   Dell        ?           XP         1         2.4       2
12   Gateway     2006        Win2k      1         3.2       3
13   Gateway     2006        Win2k      1         3         3
14   Gateway     2006        Win2k      1         3         3
15   Gateway     2006        Win2k      1         3         3
16   Gateway     2006        Win2k      1         3         3
17   Gateway     2006        Win2k      1         3         3



End of Computer appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Misc. Equipment And Environment



This address:   jragan.com/axletests.htm#10.90.90

Network :
      Segmented, switched, gigabit network using IP lookup tables.
      Stack : SysLink / TCP / IP / ethernet / CAT V
      Some external disks are on USB 2.0 cables.

Computers :
      See preceding computer appendix.

Storage :
      All storage is on mechanical hard disks.
      Disks are :
          Cheapest available regardless of speed.
          SATA and PATA.
          Mostly internal and some external SATA docks.
          External disks are on USB 2.0 cables.

Data :
      See preceding data appendix.

Setup :
      Disks are defragmented.
      No special configuration.
      Although it slows performance, the AxleBase yieldProcessor
          toggle is turned on to allow GUI control.
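
The yieldProcessor behavior can be pictured with a small conceptual sketch. The function names below are stand-ins, not AxleBase calls; the point is only that the engine periodically hands the processor back so a GUI stays responsive, at some cost in throughput.

      import time

      def process_gui_events():
          # Stand-in for whatever message pump the host application uses.
          time.sleep(0)

      def long_running_build(rows, yield_processor=True, yield_every=10_000):
          # Conceptual sketch of a long build that periodically yields.
          for i, row in enumerate(rows, start=1):
              pass                        # the row would be inserted here
              if yield_processor and i % yield_every == 0:
                  process_gui_events()    # slows the build, keeps the GUI alive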



End of Misc. Equipment appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Raw VLT Test Values
Standard Configuration



Run durations for queries against a 100 billion row table.
With AxleBase configured as a standard database manager.
Test tabulation and technical descriptions can be seen HERE .

Building this table spanned a decade. Unique query values inserted in recent years were well controlled. The early years were not well controlled, so queries on those values returned thousands of rows.

Table row is the row in which the target value is located. Row locations in recent years are precise. Approximations are indicated by tildes.



Alpha Data
Names In The Updater Column
seconds   target value     seg     table row          file server
 167.31   uma                3     ~75,000,000         2
 162.50   ruth             641     ~16,217,300,000     5
 160.45   kathy            794     ~20,088,200,000     5
 159.98   natalie          959     ~24,262,700,000     6
 160.00   lenore           999     ~25,261,997,001     6
 163.31   traci          1,208     30,521,254,501     12
 161.93   debra          1,447     36,571,265,001     12
 159.18   brenda         1,487     37,583,925,001     12
  23.81   anna           1,526     38,571,289,197     12
 165.28   vivian         1,644     41,571,247,814     12
 166.78   dierdra        1,723     43,571,252,427     12
 167.35   rebecca        2,170     54,873,828,540     13
  23.81   anna           2,454     61,973,828,520     14
 144.79   monica         2,565     64,748,828,501     14
  35.75   amber          3,035     76,500,513,500     15
 155.50   cassie         3,057     77,052,721,000     15
 162.29   nadine         3,182     80,174,688,000     16
  28.76   ava            3,411     85,899,611,600     16
 148.92   samantha       3,649     91,848,858,500     16
  23.59   abigail        3,840     96,644,828,905     17

Total 2,541.29 / 20 = 127.06 seconds average per query.
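
The published average is the arithmetic mean of the raw durations. A quick check in Python, recomputing the figure from the tabulation above (the same division applies to every tabulation in this appendix):

      # Recompute the published average for the Updater-column queries.
      durations = [167.31, 162.50, 160.45, 159.98, 160.00, 163.31, 161.93,
                   159.18, 23.81, 165.28, 166.78, 167.35, 23.81, 144.79,
                   35.75, 155.50, 162.29, 28.76, 148.92, 23.59]
      total = sum(durations)
      print(f"{total:.2f} / {len(durations)} = {total / len(durations):.2f} seconds average per query")
      # prints: 2541.29 / 20 = 127.06 seconds average per query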

( Anomaly :
      The VLT join was faster than the VLT query, and I do not know why; I apologize for that failure and for the cessation of VLT research. Both used the same table and the same hardware, so one would expect the complex join to be far slower. My surprise prompted multiple attempts (not shown) to reverse the disparity, all of which failed. I end the project with only theories about the problem, none of them worthy of publication.
            JR )



Shannon data was developed specifically to stress AxleBase, which ordinary data cannot do. It is discussed in the data theory.

Alpha (Shannon) Data
The Location Column
seconds   target value        table row          file server
  28.00   Z0R[Uej6TVcqjJ1N                 1      2
  28.43   <;QCG;^>ifM=ZL?S             9,999      2
  32.81   5:>2UkJ14Q06JEv@       100,000,000      ?
 233.64   s93683219587671        150,000,000      ?
  26.20   s01234567890123     57,073,828,530     14
  23.85   s02020202020202     61,973,828,520     14
 180.96   s91919191919191     64,748,828,501     14
 195.48   s12121212121212     70,027,390,100     15
  26.95   s09876543219876     76,500,513,500     15
  26.07   s01818181818181     77,052,721,000     15
 204.93   s23232323232323     80,174,688,000     16
 223.18   s34343434343434     85,398,865,800     16
 204.01   s45454545454545     85,899,611,600     16
 181.87   s56565656565656     91,848,858,500     16
 190.93   s67676767676767     92,052,872,000     16
 188.81   s11111111111111     95,049,263,702     17
 196.37   s89898989898989     96,109,929,423     17
 213.98   s46464646464646     96,421,830,600     17
 197.14   s57575757575757     96,644,828,905     17
 248.23   s55585558555855    100,258,828,501     17

Total 2851.84 / 20 = 142.59 seconds average per query.

The chosen values may look suspicious. They were chosen to test for conditions that can arise in very large datasets, such as intra-boundary effects, that might produce anomalous behavior in a database manager, and to ensure that each value is unique in that vast dataset. They have no effect on the test performance.

The first three values were randomly generated. For the value "5:>2UkJ14Q06JEv@", 2,419 rows were returned, which is why randomly generated values should be avoided for target values in very large tests.



Numeric test values were arbitrarily chosen, so the locations of values that fall in the old part of the table (under fifty billion) may not be known.

Numeric Data
The Citizen_id Column
seconds   value
  27.98   1000
  23.39   100000000000
  22.87   1000000000
  23.50   82999999999
  23.03   53777777777
  22.96   654829751
  23.53   65482975100
  28.32   100258828501
  26.31   70027390100
  23.20   37583925001
  38.34   52520607501

Total 283.43 / 11 = 25.76 seconds average per query.



End of Raw Test Results, standard configuration.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.






__________________________________________________
__________________________________________________
AxleBase Tests
Appendix
Raw VLT Test Values
Axsys Super-System Configuration



Run durations for queries against a 100 billion row table.
With AxleBase configured as an axsys super-server.
Test tabulation and technical descriptions can be seen HERE .

Building this table spanned a decade. Unique query values inserted in recent years were well controlled. The early years were not well controlled, so queries on those values returned thousands of rows.

Table row is the row in which the target value is located. Rows in recent years are precise. Approximations are indicated by tildes.



Alpha Data
Names In The Updater Column
seconds   target value     seg     table row          file server
   9.70   uma                3     ~75,000,000         2
   8.75   ruth             641     ~16,217,300,000     5
   5.45   kathy            794     ~20,088,200,000     5
  10.00   natalie          959     ~24,262,700,000     6
   8.62   lenore           999     ~25,261,997,001     6
   4.35   traci          1,208     30,521,254,501     12
  10.11   debra          1,447     36,571,265,001     12
   8.94   brenda         1,487     37,583,925,001     12
   2.61   anna           1,526     38,571,289,197     12
   6.42   vivian         1,644     41,571,247,814     12
   5.38   dierdra        1,723     43,571,252,427     12
  11.77   rebecca        2,170     54,873,828,540     13
   2.61   anna           2,454     61,973,828,520     14
   7.76   monica         2,565     64,748,828,501     14
   2.13   amber          3,035     76,500,513,500     15
   9.99   cassie         3,057     77,052,721,000     15
   9.26   nadine         3,182     80,174,688,000     16
   2.58   ava            3,411     85,899,611,600     16
  10.30   samantha       3,649     91,848,858,500     16
   2.49   abigail        3,840     96,644,828,905     17

Total 139.22 / 20 = 6.96 seconds average per query.



Shannon data was developed specifically to stress AxleBase, which ordinary data cannot do. It is discussed in the data theory.

Alpha (Shannon) Data
The Location Column
seconds   target value        table row          file server
  11.56   Z0R[Uej6TVcqjJ1N                 1      2
   2.10   <;QCG;^>ifM=ZL?S             9,999      2
   2.61   5:>2UkJ14Q06JEv@       100,000,000      ?
  28.51   s93683219587671        150,000,000      ?
   2.63   s01234567890123     57,073,828,530     14
   2.69   s02020202020202     61,973,828,520     14
  22.11   s91919191919191     64,748,828,501     14
  25.31   s12121212121212     70,027,390,100     15
   2.72   s09876543219876     76,500,513,500     15
   2.11   s01818181818181     77,052,721,000     15
  23.01   s23232323232323     80,174,688,000     16
  22.80   s34343434343434     85,398,865,800     16
  18.95   s45454545454545     85,899,611,600     16
  23.31   s56565656565656     91,848,858,500     16
  20.90   s67676767676767     92,052,872,000     16
  22.62   s11111111111111     95,049,263,702     17
  25.39   s89898989898989     96,109,929,423     17
  15.98   s46464646464646     96,421,830,600     17
  23.41   s57575757575757     96,644,828,905     17
  23.63   s55585558555855    100,258,828,501     17

Total 322.35 / 20 = 16.12 seconds average per query.

The chosen values may look suspicious. They were chosen to test for conditions that can arise in very large datasets, such as intra-boundary effects, that might produce anomalous behavior in a database manager, and to ensure that each value is unique in that vast dataset. They have no effect on the test performance.

The first three values were randomly generated. For the value "5:>2UkJ14Q06JEv@", 2,419 rows were returned, which is why randomly generated values should be avoided for target values in very large tests.



Numeric values in the table are all unique, so test values were arbitrarily chosen without regard to location, and the locations were not ascertained.

Numeric Data
The Citizen_id Column
seconds   value
   3.87   1000
   2.76   100000000000
   2.11   1000000000
   2.67   82999999999
   2.79   53777777777
   2.15   654829751
   2.69   65482975100
   2.58   100258828501
   2.76   70027390100
   2.78   37583925001
   2.83   52520607501
   2.15   9999999999
   2.54   99999999999
   2.69   86546841355
   2.13   834132
   2.14   1554289812
   2.17   35189218
   2.18   8125465456
   2.72   81254654560
   2.12   26579

Total 49.13 / 20 = 2.46 seconds average per query.



Super-System Imbalance

The data load was unbalanced. The primary objectives were to build a hundred-billion-row table and to minimize hardware loss from years of 24x7 pounding, so data movement after the build was resisted.

Performance of the axsys super-system was set by machines 12 through 16 because they contained so much of the data; queries ran longest on them. Each contains two disks, so they can be unloaded if more machines are found.

The following table shows the data load on each computer.

Data Load Per Computer
percentage   computer   row count
     8           2       7,569,633,500
     7           3       6,962,037,500
     7           5       7,594,950,000
     6           6       6,322,162,000
     1          11       1,262,343,500
    13          12      12,657,617,000
    13          13      12,655,085,000
    12          14      12,500,000,000
    13          15      12,625,000,000
    12          16      12,500,000,000
     8          17       7,625,000,000
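
A minimal sketch of how the percentages above derive from the per-computer row counts; the grand total matches the table's published row count.

      # Derive each computer's share of the 100,273,828,500-row table.
      rows_per_computer = {
           2:  7_569_633_500,   3:  6_962_037_500,   5:  7_594_950_000,
           6:  6_322_162_000,  11:  1_262_343_500,  12: 12_657_617_000,
          13: 12_655_085_000,  14: 12_500_000_000,  15: 12_625_000_000,
          16: 12_500_000_000,  17:  7_625_000_000,
      }
      total_rows = sum(rows_per_computer.values())
      for computer, rows in sorted(rows_per_computer.items()):
          print(f"computer {computer:2d} : {100 * rows / total_rows:5.1f} %")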



End of Raw Test Results, axsys configuration.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.




__________________________________________________
__________________________________________________
AxleBase Tests
Project Status
__________________________________________________
__________________________________________________




2003 :   Begun.
2006 :   Operational.
              Improvements and tests continued
              and attempts were made to sell it.
2016 :   Stopped.
              The project surpassed all goals,
              and gave me years of fun and lessons,
              so research, development, and testing stopped.
             

( I make changes to the web site when an error is noticed in this massive, multi-year project. A fact that I frequently forget is that, many years later, AxleBase still serves my needs and supports research in other areas. )



End of Project Status appendix.

Click to return to table of contents at the top
Or press Alt left arrow to return to text.


__________________________________________________





                                                       



Web site and technology
Copyright 2003 - 2023 John Ragan

Web site is maintained with Notepad.
By intent, so don't bother me about it.