A note on a mysqldump issue
     
  Add Date : 2018-11-21      
         
         
         
Recently a colleague used mysqldump to export data and found that the dump got stuck in the offline environment and left MySQL busy. At the same time, mysqldump reported the following error:

mysqldump: Could not execute 'SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME,
EXTENT_SIZE, INITIAL_SIZE, ENGINE FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE'
 AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME FROM INFORMATION_SCHEMA.PARTITIONS
  WHERE TABLE_SCHEMA IN ( 'dbname')) ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME
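For context, the export that triggered this was an ordinary per-database dump; a minimal sketch of such an invocation (user and database name are placeholders, not the actual command from the incident):

mysqldump -uroot -p dbname > dbname.sql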

So I ran EXPLAIN on this statement.
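The exact invocation, rebuilt from the statement quoted in the error above ('dbname' is the placeholder schema name from that message), was roughly:

mysql> EXPLAIN SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME,
    ->        EXTENT_SIZE, INITIAL_SIZE, ENGINE
    ->   FROM INFORMATION_SCHEMA.FILES
    ->  WHERE FILE_TYPE = 'DATAFILE'
    ->    AND TABLESPACE_NAME IN (SELECT DISTINCT TABLESPACE_NAME
    ->                              FROM INFORMATION_SCHEMA.PARTITIONS
    ->                             WHERE TABLE_SCHEMA IN ('dbname'))
    ->  ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME;

It produced the plan below: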

+----+-------------+------------+------+---------------+--------------+---------+------+------+----------------------------------------------------------------------------------------------------------------------+
| id | select_type | table      | type | possible_keys | key          | key_len | ref  | rows | Extra                                                                                                                |
+----+-------------+------------+------+---------------+--------------+---------+------+------+----------------------------------------------------------------------------------------------------------------------+
|  1 | SIMPLE      | FILES      | ALL  | NULL          | NULL         | NULL    | NULL | NULL | Using where; Using temporary; Using filesort                                                                         |
|  1 | SIMPLE      | PARTITIONS | ALL  | NULL          | TABLE_SCHEMA | NULL    | NULL | NULL | Using where; Open_full_table; Scanned 1 database; Distinct; FirstMatch(FILES); Using join buffer (Block Nested Loop) |
+----+-------------+------------+------+---------------+--------------+---------+------+------+----------------------------------------------------------------------------------------------------------------------+
2 rows in set (0.01 sec)
You can see that this SQL opens one database and then opens all of its tables. As we know, the information_schema.FILES table uses the MEMORY engine; its data does not actually live in a real table, so the information has to be gathered by going through all of the physical files.

mysql> show create table INFORMATION_SCHEMA.FILES\G
*************************** 1. row ***************************
       Table: FILES
Create Table: CREATE TEMPORARY TABLE `FILES` (
  `FILE_ID` bigint(4) NOT NULL DEFAULT '0',
  `FILE_NAME` varchar(64) DEFAULT NULL,
  `FILE_TYPE` varchar(20) NOT NULL DEFAULT '',
  `TABLESPACE_NAME` varchar(64) DEFAULT NULL,
  `TABLE_CATALOG` varchar(64) NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) DEFAULT NULL,
  `TABLE_NAME` varchar(64) DEFAULT NULL,
  `LOGFILE_GROUP_NAME` varchar(64) DEFAULT NULL,
  `LOGFILE_GROUP_NUMBER` bigint(4) DEFAULT NULL,
  `ENGINE` varchar(64) NOT NULL DEFAULT '',
  `FULLTEXT_KEYS` varchar(64) DEFAULT NULL,
  `DELETED_ROWS` bigint(4) DEFAULT NULL,
  `UPDATE_COUNT` bigint(4) DEFAULT NULL,
  `FREE_EXTENTS` bigint(4) DEFAULT NULL,
  `TOTAL_EXTENTS` bigint(4) DEFAULT NULL,
  `EXTENT_SIZE` bigint(4) NOT NULL DEFAULT '0',
  `INITIAL_SIZE` bigint(21) unsigned DEFAULT NULL,
  `MAXIMUM_SIZE` bigint(21) unsigned DEFAULT NULL,
  `AUTOEXTEND_SIZE` bigint(21) unsigned DEFAULT NULL,
  `CREATION_TIME` datetime DEFAULT NULL,
  `LAST_UPDATE_TIME` datetime DEFAULT NULL,
  `LAST_ACCESS_TIME` datetime DEFAULT NULL,
  `RECOVER_TIME` bigint(4) DEFAULT NULL,
  `TRANSACTION_COUNTER` bigint(4) DEFAULT NULL,
  `VERSION` bigint(21) unsigned DEFAULT NULL,
  `ROW_FORMAT` varchar(20) DEFAULT NULL,
  `TABLE_ROWS` bigint(21) unsigned DEFAULT NULL,
  `AVG_ROW_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `DATA_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `MAX_DATA_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `INDEX_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `DATA_FREE` bigint(21) unsigned DEFAULT NULL,
  `CREATE_TIME` datetime DEFAULT NULL,
  `UPDATE_TIME` datetime DEFAULT NULL,
  `CHECK_TIME` datetime DEFAULT NULL,
  `CHECKSUM` bigint(21) unsigned DEFAULT NULL,
  `STATUS` varchar(20) NOT NULL DEFAULT '',
  `EXTRA` varchar(255) DEFAULT NULL
) ENGINE=MEMORY DEFAULT CHARSET=utf8
Digging into the source code, I found that it is mysqldump itself that issues this tablespace-dumping SQL. Here is the relevant part of the source:

{ "All-tablespaces", 'Y',
   "Dump all the tablespaces.",
   & Opt_alltspcs, & opt_alltspcs, 0, GET_BOOL, NO_ARG, 0, 0, 0, 0,
   0, 0},
   
    { "No-tablespaces", 'y',
   "Do not dump any tablespace information.",
   & Opt_notspcs, & opt_notspcs, 0, GET_BOOL, NO_ARG, 0, 0, 0, 0,
   0, 0},
   
   
 if (opt_alltspcs) // If you choose to dump all of the tablespace information
    dump_all_tablespaces ();
   
 if (opt_alldbs) // If you dump all of the db
  {
    if (! opt_alltspcs &&! opt_notspcs) // If no dump all tablesapce, is also not specified not dump all tablespce
      dump_all_tablespaces ();
    dump_all_databases ();
  }
 else
  {
    // Check all arguments meet length condition. Currently database and table
    // Names are limited to NAME_LEN bytes and stack-based buffers assumes
    // That escaped name will be not longer than NAME_LEN * 2 + 2 bytes long.
    int argument;
    for (argument = 0; argument     {
      size_t argument_length = strlen (argv [argument]);
      if (argument_length> NAME_LEN)
      {
        die (EX_CONSCHECK, "[ERROR] Argument '% s' is too long, it can not be"
          "Name for any table or database n.", Argv [argument]);
      }
    }
    if (argc> 1 &&! opt_databases)
    {
      / * Only one database and selected table (s) * /
      if (! opt_alltspcs &&! opt_notspcs)
        dump_tablespaces_for_tables (* argv, (argv + 1), (argc - 1));
      dump_selected_tables (* argv, (argv + 1), (argc - 1));
    }
    else
    {
      / * One or more databases, all tables * /
      if (! opt_alltspcs &&! opt_notspcs)
        dump_tablespaces_for_databases (argv);
      dump_databases (argv);
    }
  }
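To make the branches concrete, here is roughly how a few common invocations map onto the code above (a sketch only; user and database names are placeholders):

# neither -Y nor -y given: dump_tablespaces_for_databases() runs and issues
# the INFORMATION_SCHEMA.FILES query quoted at the top of this note
mysqldump -uroot -p --databases dbname > dbname.sql

# -Y / --all-tablespaces given: dump_all_tablespaces() runs
mysqldump -uroot -p -Y --all-databases > all.sql

# -y / --no-tablespaces given: the tablespace queries are skipped entirely
mysqldump -uroot -p -y --databases dbname > dbname.sql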
As we can see, if neither --all-tablespaces nor --no-tablespaces is specified, mysqldump dumps tablespace information by default!
Let's continue with the documentation's description of --all-tablespaces:

--all-tablespaces, -Y
Adds to a table dump all SQL statements needed to create any tablespaces used by an NDB table. This information is not otherwise included in the output from mysqldump. This option is currently relevant only to MySQL Cluster tables, which are not supported in MySQL 5.7.
As you can see, this tablespace information only exists when NDB Cluster is in use. Our production environment is not NDB, so we do not need this information when dumping data.
Simply turning it off with --no-tablespaces solves the problem:

-y, --no-tablespaces
                      Do not dump any tablespace information.
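So for this case the fix is simply to add the option to the existing dump command; a minimal sketch (placeholders as before):

mysqldump -uroot -p --no-tablespaces dbname > dbname.sql

With -y / --no-tablespaces in place, mysqldump no longer issues the INFORMATION_SCHEMA.FILES query, and the export is no longer blocked by it.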
     
         
         
         