// Licensed to the Apache Software Foundation (ASF) under one or more
// contributor license agreements.  See the NOTICE file distributed with
// this work for additional information regarding copyright ownership.
// The ASF licenses this file to You under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance with
// the License.  You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

= Data Definition Language (DDL)

:toclevels:

This page encompasses all data definition language (DDL) commands supported by Ignite.

== CREATE TABLE

The command creates a new Ignite cache and defines a SQL table on top of it. The underlying cache stores the data in
the form of key-value pairs while the table allows processing the data with SQL queries.

The table will reside in the link:SQL/schemas[Schema] specified in the connection parameters. If no schema is specified,
the `PUBLIC` schema is used by default.

The `CREATE TABLE` command is synchronous and blocks the execution of other DDL commands that are issued before it has
finished. The execution of DML commands is not affected and can be performed in parallel.

[source,sql]
----
CREATE TABLE [IF NOT EXISTS] tableName (tableColumn [, tableColumn]...
[, PRIMARY KEY (columnName [,columnName]...)])
[WITH "paramName=paramValue [,paramName=paramValue]..."]

tableColumn := columnName columnType [DEFAULT defaultValue] [PRIMARY KEY]
----


Parameters:

* `tableName` - name of the table.
* `tableColumn` - name and type of a column to be created in the new table.
* `columnName` - name of a previously defined column.
* `DEFAULT` - specifies a default value for the column. Only constant values are accepted. See the example after this list.
* `IF NOT EXISTS` - create the table only if a table with the same name does not exist.
* `PRIMARY KEY` - specifies a primary key for the table that can consist of a single column or multiple columns.
* `WITH` - accepts additional parameters not defined by ANSI-99 SQL:

** `TEMPLATE=<cache's template name>` - case-sensitive name of a link:configuring-caches/configuration-overview#cache-templates[cache template]. A template is an instance of the `CacheConfiguration` class registered by calling `Ignite.addCacheConfiguration()`. Use the predefined `TEMPLATE=PARTITIONED` or `TEMPLATE=REPLICATED` templates to create the cache with the corresponding replication mode. The rest of the parameters will be those that are defined in the `CacheConfiguration` object. By default, `TEMPLATE=PARTITIONED` is used if the template is not specified explicitly.
** `BACKUPS=<number of backups>` - sets the number of link:configuring-caches/configuring-backups[partition backups]. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with `0` backup copies.
** `ATOMICITY=<ATOMIC | TRANSACTIONAL>` - sets the link:key-value-api/transactions[atomicity mode] for the underlying cache. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with the `ATOMIC` mode enabled.
** `WRITE_SYNCHRONIZATION_MODE=<PRIMARY_SYNC | FULL_SYNC | FULL_ASYNC>` - sets the write synchronization mode for the underlying cache. If neither this nor the `TEMPLATE` parameter is set, then the cache is created with `FULL_SYNC` mode enabled.
** `CACHE_GROUP=<group name>` - specifies the link:configuring-caches/cache-groups[group name] the underlying cache belongs to.
** `AFFINITY_KEY=<affinity key column name>` - specifies an link:data-modeling/affinity-collocation[affinity key] name which is a column of the `PRIMARY KEY` constraint.
** `CACHE_NAME=<custom name of the new cache>` - the name of the underlying cache created by the command. If the parameter is not specified, the `SQL_{SCHEMA_NAME}_{TABLE}` format is used for the name.
** `DATA_REGION=<existing data region name>` - name of the link:memory-configuration/data-regions[data region] where table entries should be stored. By default, Ignite stores all the data in a default region.
** `PARALLELISM=<number of SQL execution threads>` - SQL queries are executed by a single thread on each node by default, but certain scenarios can benefit from multi-threaded execution; see link:perf-and-troubleshooting/sql-tuning#query-parallelism[Query Parallelism] for details.
** `KEY_TYPE=<custom name of the key type>` - sets the name of the custom key type that is used from the key-value APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The number of fields and their types in the custom key type have to correspond to the `PRIMARY KEY`. Refer to the <<Use non-SQL API>> section below for more details.
** `VALUE_TYPE=<custom name of the value type of the new cache>` - sets the name of a custom value type that is used from the key-value and other non-SQL APIs in Ignite. The name should correspond to a Java, .NET, or C++ class, or it can be a random one if link:data-modeling/data-modeling#binary-object-format[BinaryObjects] is used instead of a custom class. The value type should include all the columns defined in the `CREATE TABLE` command except for those listed in the `PRIMARY KEY` constraint. Refer to the <<Use non-SQL API>> section below for more details. Also, the same `VALUE_TYPE` is required to use SQL queries over data replicated with link:extensions-and-integrations/change-data-capture-extensions[CDC].
** `WRAP_KEY=<true | false>` - this flag controls whether a _single column_ `PRIMARY KEY` should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to false. This flag does not have any effect on a `PRIMARY KEY` with multiple columns; it always gets wrapped regardless of the value of the parameter.
** `WRAP_VALUE=<true | false>` - this flag controls whether a single column value of a primitive type should be wrapped in the link:data-modeling/data-modeling#binary-object-format[BinaryObjects] format or not. By default, this flag is set to true. This flag does not have any effect on a value with multiple columns; it always gets wrapped regardless of the value of the parameter. Set this parameter to false if you have a single column value and do not plan to add additional columns to the table. Note that once the parameter is set to false, you can't use the `ALTER TABLE ADD COLUMN` command for this specific table.
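
For illustration, the sketch below combines the `DEFAULT` column clause with several of the `WITH` parameters described above; the table, column, and cache names are arbitrary:

[source,sql]
----
CREATE TABLE IF NOT EXISTS City (
  id int PRIMARY KEY,
  name varchar,
  population int DEFAULT 0
) WITH "template=replicated,write_synchronization_mode=full_sync,cache_name=City";
----

Here the predefined `REPLICATED` template is used for the underlying cache, and `CACHE_NAME` overrides the default `SQL_PUBLIC_CITY` name.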


Read more about the database architecture on the link:SQL/sql-introduction[SQL Introduction] page.


=== Define Primary Key

The example below shows how to create a table with `PRIMARY KEY` specified in the column definition and override cache
related parameters. A new distributed cache `SQL_PUBLIC_PERSON` will be created (the `SQL_{SCHEMA_NAME}_{TABLE}` format
is used for naming) which stores objects of the `Person` type that corresponds to a specific Java, .NET, C++ class or BinaryObject.

The distributed cache related parameters are passed in the `WITH` clause of the statement. If the `WITH` clause is omitted,
then the cache will be created with default parameters set in the `CacheConfiguration` object.

[source,sql]
----
CREATE TABLE Person (
  id int PRIMARY KEY,
  city_id int,
  name varchar,
  age int,
  company varchar
) WITH "atomicity=transactional,cachegroup=somegroup";
----


=== Use non-SQL API

If you wish to access the table data via the key-value or other non-SQL APIs, you might need to set the `CACHE_NAME`,
`KEY_TYPE`, and `VALUE_TYPE` parameters to match your business model objects and make non-SQL API usage more convenient.

- Use the `CACHE_NAME` parameter to override the default name, which follows the `SQL_{SCHEMA_NAME}_{TABLE}` format.
- By default, the command also creates two new binary types - one for the key and one for the value. Ignite generates
the names of these types randomly, including a UUID string, which complicates the usage of these types from non-SQL APIs.

The example below shows how to create a table `PERSON` and the underlying cache with the same name. The cache will store objects
of the `Person` type with the explicitly defined key type `PersonKey` and value type `PersonValue`. The `PRIMARY KEY` columns will
be used as the object's key; the rest of the columns will belong to the value.

[source,sql]
----
CREATE TABLE IF NOT EXISTS Person (
  id int,
  city_id int,
  name varchar,
  age int,
  company varchar,
  PRIMARY KEY (id, city_id)
) WITH "template=partitioned,backups=1,affinity_key=city_id,CACHE_NAME=Person,KEY_TYPE=PersonKey,VALUE_TYPE=PersonValue";
----


=== Use non-Upper Case Columns

During the `CREATE TABLE` command execution, Ignite parses all unquoted identifiers, such as the names of table columns,
and converts them to upper case. This, in turn, makes using the command with explicitly defined key
and value types a bit more challenging.

There are a few options that might help you deal with such a case:

* Use the link:SQL/sql-api[QuerySqlField] annotation. An alias for the column is created each time the `CREATE TABLE`
command is executed, so you do not have to worry about the letter case of the field.
* Keep in mind that column names are converted to upper case by default, so make sure that the letter case of the DDL fields
always matches that of the cache type fields.

The example below shows how to quote the camel-case `affKey` field in the `CREATE TABLE` command so that it matches
the same field in the `PersonKey` cache key type.

[source,sql]
----
CREATE TABLE IF NOT EXISTS Person (
  id INT,
  "affKey" INT,
  val VARCHAR,
  PRIMARY KEY (id, "affKey")
) WITH "template=partitioned,backups=1,affinity_key=affKey,CACHE_NAME=Person,KEY_TYPE=PersonKey,VALUE_TYPE=PersonValue";
----

[source,java]
----
class PersonKey {
    private int id;

    /*
     * The camel-case field 'affKey' must match the DDL table schema, so either:
     * - use the quoted "affKey" field name in the DDL table definition, or
     * - convert the 'affKey' field to upper case ('AFFKEY') to match the DDL table definition.
     */
    @AffinityKeyMapped
    private int affKey;

    public PersonKey(int id, int affKey) {
        this.id = id;
        this.affKey = affKey;
    }
}
----

Note that some integrations with Apache Ignite, such as the link:extensions-and-integrations/spring/spring-data[Spring Data]
`CrudRepository`, do not support quoted fields for accessing the data.


== ALTER TABLE

Modify the structure of an existing table.

[source,sql]
----
ALTER TABLE [IF EXISTS] tableName {alter_specification}

alter_specification:
    ADD [COLUMN] {[IF NOT EXISTS] tableColumn | (tableColumn [,...])}
  | DROP [COLUMN] {[IF EXISTS] columnName | (columnName [,...])}
  | {LOGGING | NOLOGGING}

tableColumn := columnName columnType
----

[NOTE]
====
[discrete]
=== Scope of ALTER TABLE
Presently, Ignite only supports addition and removal of columns.
====

Parameters:

- `tableName` - the name of the table.
- `tableColumn` - the name and type of the column to be added to the table.
- `columnName` - the name of the column to be added or removed.
- `IF EXISTS` - if applied to TABLE, do not throw an error if a table with the specified table name does not exist. If applied to COLUMN, do not throw an error if a column with the specified name does not exist.
- `IF NOT EXISTS` - do not throw an error if a column with the same name already exists.
- `LOGGING` - enable link:persistence/native-persistence#write-ahead-log[write-ahead logging] for the table. Write-ahead logging is enabled by default. The command is relevant only if Ignite persistence is used.
- `NOLOGGING` - disable write-ahead logging for the table. The command is relevant only if Ignite persistence is used.


`ALTER TABLE ADD` adds a new column or several columns to a previously created table. Once a column is added, it can be accessed using link:sql-reference/dml[DML commands] and indexed with the <<CREATE INDEX>> statement.

`ALTER TABLE DROP` removes an existing column or multiple columns from a table. Once a column is removed, it cannot be accessed within queries. Consider the following notes and limitations:

- The command does not remove actual data from the cluster, which means that if the column 'name' is dropped, the values of 'name' are still stored in the cluster. This limitation is to be addressed in future releases.
- If the column was indexed, the index has to be dropped manually using the 'DROP INDEX' command (see the sketch after this list).
- It is not possible to remove a column that is a primary key or a part of such a key.
- It is not possible to remove a column if it represents the whole value stored in the cluster. The limitation is relevant for primitive values.
Ignite stores data in the form of key-value pairs and all the new columns will belong to the value. It's not possible to change a set of columns of the key (`PRIMARY KEY`).
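
For example, a minimal sketch of dropping an indexed column; the table, column, and index names here are illustrative:

[source,sql]
----
-- The index covering the column must be dropped first
DROP INDEX city_idx;

-- Now the column itself can be removed
ALTER TABLE Person DROP COLUMN city;
----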

Both DDL and DML commands targeting the same table are blocked for a short time while `ALTER TABLE` is in progress.

Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.


Examples:

Add a column to the table:

[source,sql]
----
ALTER TABLE Person ADD COLUMN city varchar;
----


Add a new column to the table only if a column with the same name does not exist:

[source,sql]
----
ALTER TABLE City ADD COLUMN IF NOT EXISTS population int;
----


Add a column only if the table exists:

[source,sql]
----
ALTER TABLE IF EXISTS Missing ADD number long;
----


Add several columns to the table at once:


[source,sql]
----
ALTER TABLE Region ADD COLUMN (code varchar, gdp double);
----


Drop a column from the table:


[source,sql]
----
ALTER TABLE Person DROP COLUMN city;
----


Drop a column from the table only if a column with the same name exists:


[source,sql]
----
ALTER TABLE Person DROP COLUMN IF EXISTS population;
----


Drop a column only if the table exists:


[source,sql]
----
ALTER TABLE IF EXISTS Person DROP COLUMN number;
----


Drop several columns from the table at once:


[source,sql]
----
ALTER TABLE Person DROP COLUMN (code, gdp);
----


Disable write-ahead logging:


[source,sql]
----
ALTER TABLE Person NOLOGGING
----
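
Re-enable write-ahead logging (the complement of the `NOLOGGING` example above; `Person` is the same illustrative table):

[source,sql]
----
ALTER TABLE Person LOGGING
----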


== DROP TABLE

The `DROP TABLE` command drops an existing table.
The underlying cache with all the data in it is destroyed, too.


[source,sql]
----
DROP TABLE [IF EXISTS] tableName
----

Parameters:

- `tableName` - the name of the table.
- `IF EXISTS` - do not throw an error if a table with the specified name does not exist.


Both DDL and DML commands targeting the same table are blocked while the `DROP TABLE` is in progress.
Once the table is dropped, all pending commands will fail with appropriate errors.

Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.

Examples:

Drop the Person table if it exists:

[source,sql]
----
DROP TABLE IF EXISTS "Person";
----

== CREATE INDEX

Create an index on the specified table.

[source,sql]
----
CREATE [SPATIAL] INDEX [[IF NOT EXISTS] indexName] ON tableName
    (columnName [ASC|DESC] [,...]) [(index_option [...])]

index_option := {INLINE_SIZE size | PARALLEL parallelism_level}
----

Parameters:

* `indexName` - the name of the index to be created. The index name must be unique per schema.
* `ASC` - specifies ascending sort order (default).
* `DESC` - specifies descending sort order.
* `SPATIAL` - create the spatial index. Presently, only geometry types are supported.
* `IF NOT EXISTS` - do not throw an error if an index with the same name already exists. The database checks index names only, and does not consider column types or count. The index creation will be skipped if an index with the same name exists in the schema.
* `index_option` - additional options for index creation:
** `INLINE_SIZE` - specifies the index inline size in bytes. Depending on the size, Ignite will place the whole indexed value or a part of it directly into index pages, thus omitting extra calls to data pages and increasing query performance. Index inlining is enabled by default and the size is pre-calculated automatically based on the table structure. To disable inlining, set the size to 0 (not recommended). Refer to the link:SQL/sql-tuning#increasing-index-inline-size[Increasing Index Inline Size] section for more details.
** `PARALLEL` - specifies the number of threads to be used in parallel for index creation. The greater the number, the faster the index is created and built. If the value exceeds the number of CPU cores, it is decreased to the number of cores. If the parameter is not specified, the number of threads is calculated as 25% of the CPU cores available.


`CREATE INDEX` creates a new index on the specified table. Regular indexes are stored in the internal B+tree data structures. The B+tree gets distributed across the cluster along with the actual data. A cluster node stores a part of the index for the data it owns.

If `CREATE INDEX` is executed at runtime on live data, the database iterates over the specified columns and indexes them synchronously. Other DDL commands targeting the same table are blocked while `CREATE INDEX` is in progress. DML command execution is not affected and can be performed in parallel.

Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.


=== Indexes Tradeoffs
There are multiple things you should consider when choosing indexes for your application.

- Indexes are not free. They consume memory, and each index needs to be updated separately, so the performance of write operations might drop if too many indexes are created. On top of that, if a lot of indexes are defined, the optimizer might make more mistakes by choosing the wrong index while building the execution plan.
+
WARNING: It is a poor strategy to index everything.

- Indexes are just sorted data structures (B+tree). If you define an index for the fields (a,b,c), then the records will be sorted first by a, then by b, and only then by c.
+
[NOTE]
====
[discrete]
=== Example of Sorted Index
[width="25%" cols="33l, 33l, 33l"]
|=====
| A | B | C
| 1 | 2 | 3
| 1 | 4 | 2
| 1 | 4 | 4
| 2 | 3 | 5
| 2 | 4 | 4
| 2 | 4 | 5
|=====

Any condition like `a = 1 and b > 3` can be viewed as a bounded range; both bounds can be quickly looked up in *log(N)* time, and the result will be everything in between.

The following conditions will be able to use the index:

- `a = ?`
- `a = ? and b = ?`
- `a = ? and b = ? and c = ?`

Condition `a = ? and c = ?` is no better than `a = ?` from the index point of view (see the sketch after this list).
Obviously, half-bounded ranges like `a > ?` can be used as well.
====

- Indexes on single fields are no better than group indexes on multiple fields starting with the same field (an index on (a) is no better than an index on (a,b,c)). Thus it is preferable to use group indexes.

- When the `INLINE_SIZE` option is specified, indexes hold a prefix of field data in the B+tree pages. This improves search performance by doing fewer row data retrievals, but it substantially increases the size of the tree (with a moderate increase in tree height) and reduces data insertion and removal performance due to excessive page splits and merges. It's a good idea to consider the page size when choosing the inline size for the tree: each B+tree entry requires `16 + inline-size` bytes in the page (plus the page header and extra links for the page). For example, with the default 4 KB page size and `INLINE_SIZE 60`, at most about 4096 / (16 + 60) ≈ 53 entries fit in a page.
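
As an illustration of the points above, the sketch below defines a group index and two queries against it; the table `t` and the index name are hypothetical:

[source,sql]
----
-- Group index on (a, b, c), matching the sorted-index example above
CREATE INDEX t_abc_idx ON t (a, b, c);

-- Can use the index: the condition is a bounded range on the leading columns
SELECT * FROM t WHERE a = 1 AND b > 3;

-- No better than 'a = ?': column b is skipped, so c is checked by filtering the a = 1 range
SELECT * FROM t WHERE a = 1 AND c = 2;
----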


Examples:

Create a regular index:

[source,sql]
----
CREATE INDEX title_idx ON books (title);
----

Create a descending index only if it does not exist:

[source,sql]
----
CREATE INDEX IF NOT EXISTS name_idx ON persons (firstName DESC);
----

Create a composite index:

[source,sql]
----
CREATE INDEX city_idx ON sales (country, city);
----

Create an index specifying data inline size:

[source,sql]
----
CREATE INDEX fast_city_idx ON sales (country, city) INLINE_SIZE 60;
----
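
Create an index using a custom number of threads for the initial build (the `PARALLEL` option described above; the index name and thread count are illustrative):

[source,sql]
----
CREATE INDEX country_idx ON sales (country) PARALLEL 4;
----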

Create a geospatial index:

[source,sql]
----
CREATE SPATIAL INDEX idx_person_address ON Person (address);
----


== DROP INDEX

`DROP INDEX` deletes an existing index.


[source,sql]
----
DROP INDEX [IF EXISTS] indexName
----

Parameters:

* `indexName` - the name of the index to drop.
* `IF EXISTS` - do not throw an error if an index with the specified name does not exist. The database checks index names only, not considering column types or count.


DDL commands targeting the same table are blocked while `DROP INDEX` is in progress. DML command execution is not affected and can be performed in parallel.

Schema changes applied by this command are persisted on disk if link:persistence/native-persistence[Ignite persistence] is enabled. Thus, the changes can survive full cluster restarts.


[discrete]
=== Examples
Drop an index:


[source,sql]
----
DROP INDEX idx_person_name;
----


== CREATE USER

The command creates a user with a given name and password.

A new user can only be created using a superuser account when authentication for thin clients is enabled. Ignite creates the superuser account under the name `ignite` and password `ignite` on the first cluster start-up. Presently, you can't rename the superuser account nor grant its privileges to any other account.


[source,sql]
----
CREATE USER userName WITH PASSWORD 'password';
----

Parameters:

* `userName` - new user's name. The name cannot be longer than 60 bytes in UTF8 encoding.
* `password` - new user's password. An empty password is not allowed.

To create a _case-sensitive_ username, use a quoted (") SQL identifier.

[NOTE]
====
[discrete]
=== When Are Case-Sensitive Names Preferred?
The case-insensitivity property of usernames is supported for the JDBC and ODBC interfaces only. If you plan to access Ignite from Java, .NET, or other programming-language APIs, then the username has to be passed either in all upper-case letters or enclosed in double quotes (") from those interfaces.

For instance, if `Test` was set as a username, then:

- You can use `Test`, `TEst`, `TEST` and other combinations from JDBC and ODBC.
- You can use either `TEST` or `"Test"` as the username from Ignite's native SQL APIs designed for Java, .NET and other programming languages.

Alternatively, use a case-sensitive username at all times to ensure name consistency across all the SQL interfaces.
====

Examples:

Create a new user with `test` as both the name and the password:


[source,sql]
----
CREATE USER test WITH PASSWORD 'test';
----

Create a case-sensitive username:


[source,sql]
----
CREATE USER "TeSt" WITH PASSWORD 'test'
----


== ALTER USER

The command changes an existing user's password.
The password can be updated by the superuser (`ignite`, see <<CREATE USER>> for more details) or by the user themselves.


[source,sql]
----
ALTER USER userName WITH PASSWORD 'newPassword';
----


Parameters:

* `userName` - existing user's name.
* `newPassword` - the new password to set for the user's account.


Examples:

Update the user's password:


[source,sql]
----
ALTER USER test WITH PASSWORD 'test123';
----


== DROP USER

The command removes an existing user.

The user can be removed only by the superuser (`ignite`, see <<CREATE USER>> for more details).


[source,sql]
----
DROP USER userName;
----


Parameters:

* `userName` - the name of the user to remove.


Examples:

[source,sql]
----
DROP USER test;
----

== ANALYZE

The ANALYZE command collects link:SQL/sql-statistics[statistics,window=_blank].

[source,sql]
----
ANALYZE 'schemaName'.'tableName'(column1, column2);
----

Parameters:

* `schemaName` - the name of the schema to collect statistics for.
* `tableName` - the name of the table to collect statistics for.
* `(column1, column2)` - names of the columns to collect statistics for.

image::images/svg/analyze_bnf1.svg[Embedded,opts=inline]

image::images/svg/analyze_bnf2.svg[Embedded,opts=inline]
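
For example, a basic invocation that collects statistics for two columns of a table in the default `PUBLIC` schema (the table and column names are illustrative):

[source,sql]
----
ANALYZE public.statistics_test(col1, col2);
----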

When the ANALYZE command is used with a `WITH` parameters clause, the specified parameters are applied to every target. For example:

[source,sql]
----
ANALYZE public.statistics_test, statistics_test2, statistics_test3(col3) WITH 'MAX_CHANGED_PARTITION_ROWS_PERCENT=15,NULLS=0'
----

Possible parameters (see the example after this list):

* MAX_CHANGED_PARTITION_ROWS_PERCENT - Maximum percentage of outdated rows in the table (the default value is 15%). See the link:SQL/sql-statistics#statistics-obsolescence[SQL Statistics,window=_blank] page for more details.
* NULLS - Number of null values in the column.
* TOTAL - Total number of column values.
* SIZE - Average size of column values (in bytes).
* DISTINCT - Number of distinct non-null values in the column.
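
For instance, a sketch that sets several of these parameters for a single column, following the same `WITH` syntax as above; the parameter values here are purely illustrative:

[source,sql]
----
ANALYZE public.statistics_test(col3) WITH 'DISTINCT=500,NULLS=0,TOTAL=10000,SIZE=8'
----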
615

616
== REFRESH STATISTICS
617

618
The command refreshes link:SQL/sql-statistics[statistics,window=_blank].
619

620
[source,sql]
621
----
622
REFRESH STATISTICS 'schemaName'.'tableName'(column1, column2);
623
----
624

625
Parameters:
626

627
* `schemaName` - a name of the schema to refresh statistics for.
628
* `tableName` - a name of the table to refresh statistics for.
629
* `(column1, column2)` - names of the columns to refresh statistics for.
630

631
image::images/svg/refresh_bnf.svg[Embedded,opts=inline]
632

633
Example:
634

635
[source,sql]
636
----
637
REFRESH STATISTICS PRODUCTS, SALE(productId, discount)
638
----
639

640
== DROP STATISTICS
641

642
The command drops link:SQL/sql-statistics[statistics,window=_blank].
643

644
[source,sql]
645
----
646
DROP STATISTICS 'schemaName'.'tableName'(column1, column2);
647
----
648

649
Parameters:
650

651
* `schemaName` - a name of the schema to drop statistics for.
652
* `tableName` - a name of the table to drop statistics for.
653
* `(column1, column2)` - names of the columns to drop statistics for.
654

655
image::images/svg/drop_bnf.svg[Embedded,opts=inline]
656

657
Example:
658

659
[source,sql]
660
----
661
DROP STATISTICS USERS, ORDERS(customerId, productId)
662
----
663

664

665
