#include <NdbDictionary.hpp>
Inheritance diagram for NdbDictionary::Table (inherits from NdbDictionary::Object).
Table Size
When calculating the data storage, add the size of all attributes (each attribute consumes at least 4 bytes) plus an overhead of 12 bytes. Variable-size attributes (not yet supported) will have a size of 12 bytes plus the actual data storage, with an additional overhead that depends on the size of the variable part.
Consider an example table with 5 attributes: one 64-bit attribute, one 32-bit attribute, two 16-bit attributes, and one array of 64 8-bit values. This table consumes 12 (overhead) + 8 + 4 + 2*4 (4 bytes is the minimum) + 64 = 96 bytes per record. Additionally, an overhead of about 2 % should be allocated for page headers and waste. Thus, 1 million records should consume 96 MBytes plus roughly 2 MBytes of overhead, rounded up to 100 000 kBytes (100 MBytes).
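As a rough illustration, the sketch below simply reproduces the arithmetic of this example in C++; the program and its variable names are illustrative only and not part of the NDB API.

#include <cstdio>

// Illustrative only: the storage arithmetic from the example above.
int main()
{
    const int overhead  = 12;        // fixed per-record overhead in bytes
    const int attr64    = 8;         // one 64-bit attribute
    const int attr32    = 4;         // one 32-bit attribute
    const int attr16    = 2 * 4;     // two 16-bit attributes, 4 bytes minimum each
    const int attrArray = 64;        // one array of 64 8-bit values
    const int perRecord = overhead + attr64 + attr32 + attr16 + attrArray; // 96 bytes

    const double records = 1000000.0;
    const double dataMB  = perRecord * records / 1000000.0;  // ~96 MBytes
    const double totalMB = dataMB * 1.02;                     // plus ~2 % page headers and waste

    std::printf("%d bytes per record, about %.0f MBytes for 1 million records\n",
                perRecord, totalMB);
    return 0;
}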
NdbDictionary::Table::Table(const char *name = "")
Constructor
Parameters:
  name: Name of table
NdbDictionary::Table::Table(const Table &table)
Copy constructor
Parameters:
  table: Table to be copied
const char* NdbDictionary::Table::getName() const
Get table name
int NdbDictionary::Table::getTableId() const
Get table id
const Column* NdbDictionary::Table::getColumn(const char *name) const
Get column definition via name.
Column* NdbDictionary::Table::getColumn(const int attributeId)
Get column definition via index in table.
Column* NdbDictionary::Table::getColumn(const char *name)
Get column definition via name.
const Column* NdbDictionary::Table::getColumn(const int attributeId) const
Get column definition via index in table.
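The following is a minimal sketch of using these accessors, assuming a Table object has already been defined or retrieved; the function name and the column name "LAST_NAME" are examples only.

#include <NdbDictionary.hpp>

// Sketch: look up column definitions by name and by index.
// 'tab' is assumed to be an already defined or retrieved table.
void inspectColumns(const NdbDictionary::Table &tab)
{
    const NdbDictionary::Column *byName = tab.getColumn("LAST_NAME");
    if (byName != 0) {
        // ... use the column metadata ...
    }

    for (int i = 0; i < tab.getNoOfColumns(); i++) {
        const NdbDictionary::Column *byIndex = tab.getColumn(i);
        // ... inspect each column definition ...
        (void)byIndex;
    }
}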
bool NdbDictionary::Table::getLogging() const
If set to false, then the table is a temporary table and is not logged to disk.
In case of a system restart the table will still be defined and exist but will be empty. Thus no checkpointing and no logging is performed on the table.
The default value is true and indicates a normal table with full checkpointing and logging activated.
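As a small sketch, the helper below (an illustrative name, not part of the API) uses getLogging() to distinguish a durable table from a temporary one.

#include <NdbDictionary.hpp>

// Sketch: getLogging() distinguishes a normal, fully logged table from a
// temporary table that survives a system restart only as an empty definition.
bool isDurable(const NdbDictionary::Table &tab)
{
    return tab.getLogging();   // true: checkpointed and logged; false: temporary
}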
FragmentType NdbDictionary::Table::getFragmentType() const
Get fragmentation type
int NdbDictionary::Table::getKValue() const
Get KValue (hash parameter). The only allowed value is 6; later implementations might add flexibility to this parameter.
int NdbDictionary::Table::getMinLoadFactor() const
Get MinLoadFactor (hash parameter). This value specifies the load factor at which the hash table starts to shrink. It must be smaller than MaxLoadFactor; both factors are given as percentages.
int NdbDictionary::Table::getMaxLoadFactor() const
Get MaxLoadFactor (hash parameter). This value specifies the load factor at which the containers in the local hash tables start to split. The maximum is 100, which optimizes memory usage; a lower figure stores less information in each container, making key lookups faster but consuming more memory.
int NdbDictionary::Table::getNoOfColumns() const
Get number of columns in the table
int NdbDictionary::Table::getNoOfPrimaryKeys() const
Get number of primary keys in the table
const char* NdbDictionary::Table::getPrimaryKey(int no) const
Get name of the primary key at the given index
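A minimal sketch that lists the primary key names of a table using getNoOfPrimaryKeys() and getPrimaryKey(); the helper function is illustrative only.

#include <cstdio>
#include <NdbDictionary.hpp>

// Sketch: print the name of every primary key column in a table.
void printPrimaryKeys(const NdbDictionary::Table &tab)
{
    for (int i = 0; i < tab.getNoOfPrimaryKeys(); i++) {
        std::printf("primary key %d: %s\n", i, tab.getPrimaryKey(i));
    }
}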
bool NdbDictionary::Table::equal(const Table &) const
Check if table is equal to some other table
const void* NdbDictionary::Table::getFrmData() const
Get frm file stored with this table
Table& NdbDictionary::Table::operator=(const Table &table)
Assignment operator, deep copy.
Parameters:
  table: Table to be copied
void NdbDictionary::Table::setName(const char *name)
Set name of table
Parameters:
  name: Name of table
void NdbDictionary::Table::addColumn(const Column &)
Add a column definition to a table
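A hedged sketch of building a table definition in memory with addColumn(); the table name, column names, and column settings are examples only, and the resulting definition must still be submitted to the dictionary (for example via NdbDictionary::Dictionary::createTable) before it exists in the cluster.

#include <NdbDictionary.hpp>

// Sketch: define a table with two columns in memory.
NdbDictionary::Table makeExampleTable()
{
    NdbDictionary::Table tab("EXAMPLE_TABLE");

    NdbDictionary::Column id("ID");
    id.setType(NdbDictionary::Column::Unsigned);
    id.setPrimaryKey(true);
    tab.addColumn(id);                 // the column definition is copied into the table

    NdbDictionary::Column name("NAME");
    name.setType(NdbDictionary::Column::Char);
    name.setLength(64);
    name.setNullable(true);
    tab.addColumn(name);

    return tab;                        // returned via the copy constructor (deep copy)
}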
void NdbDictionary::Table::setLogging(bool)
Set whether the table is logged to disk; see getLogging()
void NdbDictionary::Table::setFragmentType(FragmentType)
Set fragmentation type
void NdbDictionary::Table::setKValue(int kValue)
Set KValue (hash parameter). The only allowed value is 6; later implementations might add flexibility to this parameter.
void NdbDictionary::Table::setMinLoadFactor(int)
Set MinLoadFactor (hash parameter). This value specifies the load factor at which the hash table starts to shrink. It must be smaller than MaxLoadFactor; both factors are given as percentages.
void NdbDictionary::Table::setMaxLoadFactor(int)
Set MaxLoadFactor (hash parameter). This value specifies the load factor at which the containers in the local hash tables start to split. The maximum is 100, which optimizes memory usage; a lower figure stores less information in each container, making key lookups faster but consuming more memory.
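As an illustrative sketch, the helper below sets the hash parameters on a table definition before creation; the load factor percentages are example values, and only 6 is allowed for KValue.

#include <NdbDictionary.hpp>

// Sketch: adjust the hash parameters of a table definition.
void tuneHashParameters(NdbDictionary::Table &tab)
{
    tab.setKValue(6);           // only allowed value
    tab.setMinLoadFactor(70);   // example; must be smaller than MaxLoadFactor
    tab.setMaxLoadFactor(80);   // example; 100 is the maximum and optimizes memory usage
}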
Object::Type NdbDictionary::Table::getObjectType() const
Get table object type
virtual Object::Status NdbDictionary::Table::getObjectStatus() const [virtual]
Get object status
Implements NdbDictionary::Object.
virtual int NdbDictionary::Table::getObjectVersion() const [virtual]
Get object version
Implements NdbDictionary::Object.
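A short sketch using the object status and version, assuming Object::Retrieved is the NdbDictionary::Object::Status value of a definition fetched from the dictionary; the helper is illustrative only.

#include <NdbDictionary.hpp>

// Sketch: check the status and schema version of a table definition.
bool isRetrievedDefinition(const NdbDictionary::Table &tab)
{
    return tab.getObjectStatus() == NdbDictionary::Object::Retrieved
        && tab.getObjectVersion() >= 0;
}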
void NdbDictionary::Table::setFrm(const void *data, Uint32 len)
Set frm file to store with this table
void NdbDictionary::Table::setObjectType(Object::Type type)
Set table object type
void NdbDictionary::Table::setMaxRows(Uint64 maxRows)
Set maximum number of rows in table (only used to calculate the number of partitions).
void NdbDictionary::Table::setMinRows(Uint64 minRows)
Set minimum number of rows in table (only used to calculate the number of partitions).