// Licensed to the Apache Software Foundation (ASF) under one or more
// contributor license agreements. See the NOTICE file distributed with
// this work for additional information regarding copyright ownership.
// The ASF licenses this file to You under the Apache License, Version 2.0
// (the "License"); you may not use this file except in compliance with
// the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
= Persistence Tuning
:javaFile: {javaCodeDir}/PersistenceTuning.java
:xmlFile: code-snippets/xml/persistence-tuning.xml
:dotnetFile: code-snippets/dotnet/PersistenceTuning.cs

This article summarizes best practices for tuning Ignite native persistence.
If you use an external (third-party) store for persistence, refer to the performance guides provided by that vendor.

== Adjusting Page Size

The `DataStorageConfiguration.pageSize` parameter should be no less than the smaller of: the page size of your storage media (SSD, Flash, HDD, etc.) and the cache page size of your operating system.
The default value is 4 KB.

The operating system's cache page size can be checked using
link:https://unix.stackexchange.com/questions/128213/how-is-page-size-determined-in-virtual-address-space[system tools and parameters, window=_blank].

The page size of a storage device such as an SSD is usually noted in the device specification. If the manufacturer does not disclose this information, try running SSD benchmarks to find out the number.
Many manufacturers have to adapt their drivers for 4 KB random-write workloads because a variety of standard
benchmarks use 4 KB by default.
link:https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ssd-server-storage-applications-paper.pdf[This white paper,window=_blank] from Intel confirms that 4 KB should be enough.

Once you pick the optimal page size, apply it in your cluster configuration:

////
TODO for .NET and other languages.
////

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;page-size,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=page-size,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=page-size,indent=0]
----
tab:C++[unsupported]
--

== Keep WALs Separately

Consider using separate drives for data files and the link:persistence/native-persistence#write-ahead-log[Write-Ahead Log (WAL)].
Ignite actively writes to both the data and WAL files.

The example below shows how to configure separate paths for the data storage, WAL, and WAL archive:

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;paths,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=separate-wal,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=separate-wal,indent=0]
----
tab:C++[unsupported]
--

== Increasing WAL Segment Size

The default WAL segment size (64 MB) may be inefficient in high-load scenarios because it causes the WAL to switch between segments too frequently, and switching/rotation is a costly operation. Setting the segment size to a higher value (up to 2 GB) may help reduce the number of switching operations. However, the tradeoff is that this increases the overall volume of the write-ahead log.

See link:persistence/native-persistence#changing-wal-segment-size[Changing WAL Segment Size] for details.

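For illustration, here is a minimal Java sketch based on Ignite's `DataStorageConfiguration` API; the 256 MB value is an arbitrary example, not a recommendation:

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalSegmentSizeExample {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // The segment size is set in bytes; here it is raised from the
        // default 64 MB to 256 MB to make segment rotation less frequent.
        storageCfg.setWalSegmentSize(256 * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
        // Pass cfg to Ignition.start(...) when launching the node.
    }
}
```
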
== Changing WAL Mode

Consider other WAL modes as alternatives to the default mode. Each mode provides a different degree of reliability in
case of node failure, and that degree is inversely proportional to speed: the more reliable the WAL mode, the
slower it is. Therefore, if your use case does not require high reliability, you can switch to a less reliable mode.

See link:persistence/native-persistence#wal-modes[WAL Modes] for more details.

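As a sketch (assuming the `WALMode` enum from Ignite's configuration package), a less strict mode can be selected like this:

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.WALMode;

public class WalModeExample {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // BACKGROUND mode flushes the WAL to disk periodically instead of on
        // every commit, trading some durability in case of a crash for speed.
        storageCfg.setWalMode(WALMode.BACKGROUND);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
    }
}
```
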
== Disabling WAL

//TODO: when should this be done?
There are situations where link:persistence/native-persistence#disabling-wal[disabling the WAL] can help improve performance.

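A common case is bulk data loading. As a minimal sketch (assuming the `IgniteCluster` WAL management API; `"myCache"` is a placeholder cache name), the WAL can be switched off for a cache during the load and re-enabled afterwards:

```java
import org.apache.ignite.Ignite;

public class WalToggleExample {
    static void bulkLoad(Ignite ignite) {
        // Turn the WAL off for one cache only; other caches stay protected.
        ignite.cluster().disableWal("myCache");

        try {
            // ... load the data without per-update WAL overhead ...
        }
        finally {
            // Re-enabling the WAL forces a checkpoint so the loaded data becomes durable.
            ignite.cluster().enableWal("myCache");
        }
    }
}
```
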
== Pages Write Throttling

Ignite periodically starts the link:persistence/native-persistence#checkpointing[checkpointing process] that syncs dirty pages from memory to disk. A dirty page is a page that was updated in RAM but was not yet written to the respective partition file (the update was only appended to the WAL). This process happens in the background without affecting the application's logic.

However, if a dirty page that is scheduled for checkpointing is updated before being written to disk, its previous state is copied to a special region called a checkpointing buffer.
If the buffer overflows, Ignite stops processing all updates until the checkpointing is over.
As a result, write performance can drop to zero, as shown in this diagram, until the checkpointing cycle is completed:

image::images/checkpointing-chainsaw.png[Checkpointing Chainsaw]

The same situation occurs if the dirty pages threshold is reached again while checkpointing is in progress.
This forces Ignite to schedule one more checkpointing execution and to halt all update operations until the first checkpointing cycle is over.

Both situations usually arise when either the disk device is slow or the update rate is too intensive.
To mitigate and prevent these performance drops, consider enabling the pages write throttling algorithm.
The algorithm slows update operations down to the speed of the disk device whenever the checkpointing buffer fills up too fast or the percentage of dirty pages soars rapidly.

[NOTE]
====
[discrete]
=== Pages Write Throttling in a Nutshell

Refer to the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-PagesWriteThrottling[Ignite wiki page, window=_blank] maintained by Apache Ignite persistence experts to get more details about throttling and its causes.
====

The example below shows how to enable write throttling:

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=throttling,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=throttling,indent=0]
----
tab:C++[unsupported]
--

== Adjusting Checkpointing Buffer Size

The size of the checkpointing buffer, explained in the previous section, is one of the checkpointing process triggers.

The default buffer size is calculated as a function of the link:memory-configuration/data-regions[data region] size:

[width=100%,cols="1,2",options="header"]
|=======================================================================
| Data Region Size | Default Checkpointing Buffer Size

| < 1 GB | MIN (256 MB, Data_Region_Size)

| Between 1 GB and 8 GB | Data_Region_Size / 4

| > 8 GB | 2 GB

|=======================================================================

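The rule in the table can be expressed as a small helper function (illustrative only; the method name is made up, and the behavior exactly at the 1 GB and 8 GB boundaries is an assumption):

```java
public class CheckpointBufferDefaults {
    static final long MB = 1024L * 1024;
    static final long GB = 1024L * MB;

    /** Default checkpointing buffer size for a given data region size, per the table above. */
    static long defaultCheckpointBufferSize(long regionSize) {
        if (regionSize < GB)
            return Math.min(256 * MB, regionSize);

        if (regionSize <= 8 * GB)
            return regionSize / 4;

        return 2 * GB;
    }

    public static void main(String[] args) {
        System.out.println(defaultCheckpointBufferSize(512 * MB) / MB); // 256
        System.out.println(defaultCheckpointBufferSize(4 * GB) / GB);   // 1
        System.out.println(defaultCheckpointBufferSize(32 * GB) / GB);  // 2
    }
}
```
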
The default buffer size can be suboptimal for write-intensive workloads because the page write throttling algorithm will slow down your writes whenever the size reaches the critical mark.
To keep write performance at the desired pace while checkpointing is in progress, consider increasing
`DataRegionConfiguration.checkpointPageBufferSize` and enabling write throttling to prevent performance drops:

[tabs]
--
tab:XML[]
[source,xml]
----
include::{xmlFile}[tags=!*;ignite-config;ds;page-write-throttling;data-region,indent=0]
----
tab:Java[]
[source,java]
----
include::{javaFile}[tag=checkpointing-buffer-size,indent=0]
----
tab:C#/.NET[]
[source,csharp]
----
include::{dotnetFile}[tag=checkpointing-buffer-size,indent=0]
----
tab:C++[unsupported]
--

In the example above, the checkpointing buffer size of the default region is set to 1 GB.

////
TODO: describe when checkpointing is triggered
[NOTE]
====
[discrete]
=== When is the Checkpointing Process Triggered?

Checkpointing is started if either the dirty pages count goes beyond the `totalPages * 2 / 3` value or
`DataRegionConfiguration.checkpointPageBufferSize` is reached. However, if page write throttling is used, then
`DataRegionConfiguration.checkpointPageBufferSize` is never reached due to the way the algorithm works.
====
////

== Enabling Direct I/O
//TODO: why is this not enabled by default?
Usually, whenever an application reads data from disk, the OS fetches the data and puts it in a file buffer cache first.
Similarly, for every write operation, the OS first writes the data to the cache and transfers it to disk later. To
eliminate this process, you can enable Direct I/O, in which case the data is read and written directly from/to the
disk, bypassing the file buffer cache.

The Direct I/O module in Ignite is used to speed up the checkpointing process, which writes dirty pages from RAM to disk. Consider using the Direct I/O plugin for write-intensive workloads.

[NOTE]
====
[discrete]
=== Direct I/O and WALs

Note that Direct I/O cannot be enabled specifically for WAL files. However, enabling the Direct I/O module provides
a slight benefit regarding the WAL files as well: the WAL data will not be stored in the OS buffer cache for too long;
it will be flushed (depending on the WAL mode) at the next page cache scan and removed from the page cache.
====

To enable Direct I/O, move the `{IGNITE_HOME}/libs/optional/ignite-direct-io` folder to the upper-level `{IGNITE_HOME}/libs` folder of your Ignite distribution, or add the module as a Maven dependency as described link:setup#enabling-modules[here].

You can use the `IGNITE_DIRECT_IO_ENABLED` system property to enable or disable the plugin at runtime.

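For example (a sketch; the property is read when the node starts, so it must be set beforehand — setting it programmatically is an assumption, equivalent to passing `-DIGNITE_DIRECT_IO_ENABLED=false` on the JVM command line):

```java
public class DirectIoToggle {
    public static void main(String[] args) {
        // Disable the Direct I/O plugin for this run even though the
        // ignite-direct-io module is on the classpath.
        System.setProperty("IGNITE_DIRECT_IO_ENABLED", "false");
        // ... then start the Ignite node, e.g. Ignition.start(cfg);
    }
}
```
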
Get more details from the link:https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-DirectI/O[Ignite Direct I/O Wiki section, window=_blank].

== Purchase Production-Level SSDs

Note that the performance of Ignite Native Persistence may drop after several hours of intensive write load due to
the nature of how link:http://codecapsule.com/2014/02/12/coding-for-ssds-part-2-architecture-of-an-ssd-and-benchmarking[SSDs are designed and operate, window=_blank].
Consider buying fast production-level SSDs to keep the performance high, or switch to non-volatile memory devices like
Intel Optane Persistent Memory.

== SSD Over-provisioning

The performance of random writes on a 50% filled disk is much better than on a 90% filled disk because of SSD over-provisioning (see link:https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti[https://www.seagate.com/tech-insights/ssd-over-provisioning-benefits-master-ti, window=_blank]).

Consider buying SSDs with higher over-provisioning rates, and make sure the manufacturer provides the tools to adjust them.

[NOTE]
====
[discrete]
=== Intel 3D XPoint

Consider using 3D XPoint drives instead of regular SSDs to avoid the bottlenecks caused by a low over-provisioning
setting and constant garbage collection at the SSD level.
Read more link:http://dmagda.blogspot.com/2017/10/3d-xpoint-outperforms-ssds-verified-on.html[here, window=_blank].
====
