// Licensed to the Apache Software Foundation (ASF) under one or more
// contributor license agreements.  See the NOTICE file distributed with
// this work for additional information regarding copyright ownership.
// The ASF licenses this file to You under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance with
// the License.  You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
= Persistence Tuning
:javaFile: {javaCodeDir}/PersistenceTuning.java
:xmlFile: code-snippets/xml/persistence-tuning.xml
:dotnetFile: code-snippets/dotnet/PersistenceTuning.cs

This article summarizes best practices for Ignite native persistence tuning.
If you are using an external (3rd party) storage for persistence needs, please refer to performance guides from the 3rd party vendor.

== Adjusting Page Size

The `DataStorageConfiguration.pageSize` parameter should be no less than the lower of: the page size of your storage media (SSD, Flash, HDD, etc.) and the cache page size of your operating system.
The default value is 4 KB.

The operating system's cache page size can be easily checked using
link:https://unix.stackexchange.com/questions/128213/how-is-page-size-determined-in-virtual-address-space[system tools and parameters, window=_blank].

The page size of a storage device such as an SSD is usually noted in the device specification. If the manufacturer does not disclose this information, try to run SSD benchmarks to figure out the number.
Many manufacturers have to adapt their drives for 4 KB random-write workloads because a variety of standard
benchmarks use 4 KB by default.
link:https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ssd-server-storage-applications-paper.pdf[This white paper,window=_blank] from Intel confirms that 4 KB should be enough.

Once you pick the optimal page size, apply it in your cluster configuration:

////
TODO for .NET and other languages.
////

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;page-size,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=page-size,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=page-size,indent=0]
----
tab:C++[unsupported]
--

== Keep WALs Separately

Consider using separate drives for data files and link:persistence/native-persistence#write-ahead-log[Write-Ahead-Logging (WAL)].
Ignite actively writes to both the data and WAL files.

The example below shows how to configure separate paths for the data storage, WAL, and WAL archive:

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;paths,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=separate-wal,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=separate-wal,indent=0]
----
tab:C++[unsupported]
--

== Increasing WAL Segment Size

The default WAL segment size (64 MB) may be inefficient in high-load scenarios because it causes the WAL to switch between segments too frequently, and switching/rotation is a costly operation. Setting the segment size to a higher value (up to 2 GB) may help reduce the number of switching operations. However, the tradeoff is that this will increase the overall volume of the write-ahead log.

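If you decide to increase the segment size, a minimal Java sketch of the corresponding configuration is shown below; the 512 MB value is only an illustrative assumption, so pick a size based on your own load tests:

[source,java]
----
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// The WAL segment size is set in bytes; 512 MB here is just an example value.
storageCfg.setWalSegmentSize(512 * 1024 * 1024);

cfg.setDataStorageConfiguration(storageCfg);
----
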
See link:persistence/native-persistence#changing-wal-segment-size[Changing WAL Segment Size] for details.

== Changing WAL Mode

Consider other WAL modes as alternatives to the default mode. Each mode provides a different degree of reliability in
case of node failure, and that degree is inversely proportional to speed: the more reliable the WAL mode, the
slower it is. Therefore, if your use case does not require high reliability, you can switch to a less reliable mode.

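For illustration, the sketch below switches a node to the `BACKGROUND` WAL mode; the mode chosen here is an assumption for the example, not a recommendation:

[source,java]
----
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// BACKGROUND flushes the WAL periodically in a background thread: faster than the
// default LOG_ONLY mode, but recent updates may be lost if the node crashes.
storageCfg.setWalMode(WALMode.BACKGROUND);

cfg.setDataStorageConfiguration(storageCfg);
----
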
See link:persistence/native-persistence#wal-modes[WAL Modes] for more details.

== Disabling WAL

//TODO: when should this be done?
There are situations where link:persistence/native-persistence#disabling-wal[disabling the WAL] can help improve performance.
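
For example, the WAL is sometimes disabled for a cache for the duration of an initial bulk data load and re-enabled afterwards. A hedged sketch, where the cache name and the loading routine are placeholders:

[source,java]
----
// 'ignite' is a started node instance.
// Temporarily disable the WAL for the cache while preloading data.
ignite.cluster().disableWal("myCache");

loadData(ignite);   // hypothetical bulk-loading routine

// Re-enable the WAL once the load is finished.
ignite.cluster().enableWal("myCache");
----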

== Pages Writes Throttling

Ignite periodically starts the link:persistence/native-persistence#checkpointing[checkpointing process] that syncs dirty pages from memory to disk. A dirty page is a page that was updated in RAM but was not written to a respective partition file (an update was just appended to the WAL). This process happens in the background without affecting the application's logic.

However, if a dirty page, scheduled for checkpointing, is updated before being written to disk, its previous state is copied to a special region called a checkpointing buffer.
If the buffer overflows, Ignite will stop processing all updates until the checkpointing is over.
As a result, write performance can drop to zero as shown in this diagram, until the checkpointing cycle is completed:

image::images/checkpointing-chainsaw.png[Checkpointing Chainsaw]

The same situation occurs if the dirty pages threshold is reached again while the checkpointing is in progress.
This will force Ignite to schedule one more checkpointing execution and to halt all the update operations until the first checkpointing cycle is over.

Both situations usually arise when either a disk device is slow or the update rate is too intensive.
To mitigate and prevent these performance drops, consider enabling the pages write throttling algorithm.
The algorithm brings the performance of update operations down to the speed of the disk device whenever the checkpointing buffer fills up too quickly or the percentage of dirty pages soars rapidly.

[NOTE]
====
[discrete]
=== Pages Write Throttling in a Nutshell

Refer to the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-PagesWriteThrottling[Ignite wiki page, window=_blank] maintained by Apache Ignite persistence experts to get more details about throttling and its causes.
====

The example below shows how to enable write throttling:

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=throttling,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=throttling,indent=0]
----
tab:C++[unsupported]
--

== Adjusting Checkpointing Buffer Size

The size of the checkpointing buffer, explained in the previous section, is one of the checkpointing process triggers.

The default buffer size is calculated as a function of the link:memory-configuration/data-regions[data region] size:

[width=100%,cols="1,2",options="header"]
|=======================================================================
| Data Region Size |Default Checkpointing Buffer Size

|< 1 GB | MIN (256 MB, Data_Region_Size)

|between 1 GB and 8 GB | Data_Region_Size / 4

|> 8 GB | 2 GB

|=======================================================================

The default buffer size can be suboptimal for write-intensive workloads because the page write throttling algorithm will slow down your writes whenever the buffer fill level reaches the critical mark.
To keep write performance at the desired pace while the checkpointing is in progress, consider increasing
`DataRegionConfiguration.checkpointPageBufferSize` and enabling write throttling to prevent performance drops:

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling;data-region,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=checkpointing-buffer-size,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=checkpointing-buffer-size,indent=0]
----
tab:C++[unsupported]
--

In the example above, the checkpointing buffer size of the default region is set to 1 GB.

////
TODO: describe when checkpointing is triggered
[NOTE]
====
[discrete]
=== When is the Checkpointing Process Triggered?

Checkpointing is started if either the dirty pages count goes beyond the `totalPages * 2 / 3` value or
`DataRegionConfiguration.checkpointPageBufferSize` is reached. However, if page write throttling is used, then
`DataRegionConfiguration.checkpointPageBufferSize` is never encountered because it cannot be reached due to the way the algorithm works.
====
////

== Enabling Direct I/O
//TODO: why is this not enabled by default?
Usually, whenever an application reads data from disk, the OS gets the data and puts it in a file buffer cache first.
Similarly, for every write operation, the OS first writes the data in the cache and transfers it to disk later. To
eliminate this process, you can enable Direct I/O in which case the data is read and written directly from/to the
disk, bypassing the file buffer cache.

The Direct I/O module in Ignite is used to speed up the checkpointing process, which writes dirty pages from RAM to disk. Consider using the Direct I/O plugin for write-intensive workloads.

[NOTE]
====
[discrete]
=== Direct I/O and WALs

Note that Direct I/O cannot be enabled specifically for WAL files. However, enabling the Direct I/O module provides
a slight benefit regarding the WAL files as well: the WAL data will not be stored in the OS buffer cache for too long;
it will be flushed (depending on the WAL mode) at the next page cache scan and removed from the page cache.
====

To enable Direct I/O, move the `{IGNITE_HOME}/libs/optional/ignite-direct-io` folder to the upper level `libs` folder of your Ignite distribution, or add the module as a Maven dependency as described link:setup#enabling-modules[here].

You can use the `IGNITE_DIRECT_IO_ENABLED` system property to enable or disable the plugin at runtime.

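For instance, assuming the `ignite-direct-io` module is already on the classpath, the hedged sketch below turns the plugin off for a particular run; passing `-DIGNITE_DIRECT_IO_ENABLED=false` to the JVM achieves the same:

[source,java]
----
// Disable the Direct I/O plugin for this node; the property must be set before the node starts.
System.setProperty("IGNITE_DIRECT_IO_ENABLED", "false");

Ignite ignite = Ignition.start(new IgniteConfiguration());
----
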
Get more details from the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-DirectI/O[Ignite Direct I/O Wiki section, window=_blank].

== Purchase Production-Level SSDs

Note that the performance of Ignite Native Persistence may drop after several hours of intensive write load due to
the nature of how link:http://codecapsule.com/2014/02/12/coding-for-ssds-part-2-architecture-of-an-ssd-and-benchmarking[SSDs are designed and operate, window=_blank].
Consider buying fast production-level SSDs to keep the performance high, or switching to non-volatile memory devices like
Intel Optane Persistent Memory.

== SSD Over-provisioning

Performance of random writes on a 50% filled disk is much better than on a 90% filled disk because of SSD over-provisioning (see link:https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti[https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti, window=_blank]).

Consider buying SSDs with higher over-provisioning rates and make sure the manufacturer provides the tools to adjust it.

[NOTE]
====
[discrete]
=== Intel 3D XPoint

Consider using 3D XPoint drives instead of regular SSDs to avoid the bottlenecks caused by a low over-provisioning
setting and constant garbage collection at the SSD level.
Read more link:http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-ssds-verified-on.html[here, window=_blank].
====
