.\"  Copyright (c) 2006-2012 Red Hat, Inc. <http://www.redhat.com>
.\"  This file is part of GlusterFS.
.\"
.\"  This file is licensed to you under your choice of the GNU Lesser
.\"  General Public License, version 3 or any later version (LGPLv3 or
.\"  later), or the GNU General Public License, version 2 (GPLv2), in all
.\"  cases as published by the Free Software Foundation.
.\"
.\"
.TH Gluster 8 "Gluster command line utility" "07 March 2011" "Gluster Inc."
.SH NAME
gluster - Gluster Console Manager (command line utility)
.SH SYNOPSIS
.B gluster
.PP
To run the program and display the gluster prompt:
.PP
.B gluster [--remote-host=<gluster_node>] [--mode=script] [--xml]
.PP
(or)
.PP
To specify a command directly:
.PP
.B gluster
.I [commands] [options] [--remote-host=<gluster_node>] [--mode=script] [--xml]
.SH DESCRIPTION
The Gluster Console Manager is a command line utility for elastic volume management. You can run the gluster command on any export server. It enables administrators to perform cloud operations, such as creating, expanding, shrinking, rebalancing, and migrating volumes, without needing to schedule server downtime.
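.PP
For example, a command can be run from the interactive prompt or given directly on the command line; the volume name below is a placeholder:
.nf
# gluster volume info
# gluster --mode=script volume status test-volume
.fi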
.SH COMMANDS

.SS "Volume Commands"
.PP
.TP

\fB\ volume info [all|<VOLNAME>] \fR
Display information about all volumes, or the specified volume.
.TP
\fB\ volume list \fR
List all volumes in the cluster.
.TP
\fB\ volume status [all | <VOLNAME> [nfs|shd|<BRICK>|quotad]] [detail|clients|mem|inode|fd|callpool|tasks|client-list] \fR
Display the status of all volumes, or of the specified volume or brick.
.TP
\fB\ volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ... <TA-BRICK> \fR
Create a new volume of the specified type using the specified bricks and transport type (the default transport type is tcp).
To create a volume with both transports (tcp and rdma), give 'transport tcp,rdma' as an option.
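.IP
For example, a three-way replicated volume can be created as follows (hostnames and brick paths are placeholders):
.nf
# gluster volume create test-volume replica 3 server1:/exp1 server2:/exp2 server3:/exp3
.fi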
.TP
\fB\ volume delete <VOLNAME> \fR
Delete the specified volume.
.TP
\fB\ volume start <VOLNAME> \fR
Start the specified volume.
.TP
\fB\ volume stop <VOLNAME> [force] \fR
Stop the specified volume.
.TP
\fB\ volume set <VOLNAME> <OPTION> <PARAMETER> [<OPTION> <PARAMETER>] ... \fR
Set the volume options.
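.IP
For example, to tune one option on a volume (the option name and value are shown only as an illustration):
.nf
# gluster volume set test-volume performance.cache-size 256MB
.fi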
.TP
\fB\ volume get <VOLNAME/all> <OPTION/all> \fR
Get the value of the given option (or of all options) for the specified volume, or for all volumes. Use 'gluster volume get all all' to retrieve all global options.
.TP
\fB\ volume reset <VOLNAME> [option] [force] \fR
Reset all the reconfigured options.
.TP
\fB\ volume barrier <VOLNAME> {enable|disable} \fR
Barrier/unbarrier file operations on a volume.
.TP
\fB\ volume clear-locks <VOLNAME> <path> kind {blocked|granted|all} {inode [range]|entry [basename]|posix [range]} \fR
Clear the locks held on the given path.
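.IP
For example, to clear all inode locks held on one file (the volume name and path are placeholders):
.nf
# gluster volume clear-locks test-volume /locked-file kind all inode
.fi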
.TP
\fB\ volume help \fR
Display help for the volume command.
.SS "Brick Commands"
.PP
.TP
\fB\ volume add-brick <VOLNAME> <NEW-BRICK> ... \fR
Add the specified brick to the specified volume.
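.IP
For example, to expand a distributed volume with one more brick (hostname and path are placeholders):
.nf
# gluster volume add-brick test-volume server4:/exp4
.fi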
.TP
\fB\ volume remove-brick <VOLNAME> <BRICK> ... \fR
Remove the specified brick from the specified volume.
.IP
.B Note:
If you remove the brick, the data stored in that brick will no longer be available. You can migrate data from one brick to another using the
.B replace-brick
option.
.TP
\fB\ volume reset-brick <VOLNAME> <SOURCE-BRICK> {{start} | {<NEW-BRICK> commit}} \fR
Bring down the specified source brick (start), or replace it with the new brick (commit).
.TP
\fB\ volume replace-brick <VOLNAME> <SOURCE-BRICK> <NEW-BRICK> commit force \fR
Replace the specified source brick with a new brick.
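.IP
For example (hostnames and brick paths are placeholders):
.nf
# gluster volume replace-brick test-volume server3:/exp3 server5:/exp5 commit force
.fi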
.TP
\fB\ volume rebalance <VOLNAME> start \fR
Start rebalancing the specified volume.
.TP
\fB\ volume rebalance <VOLNAME> stop \fR
Stop rebalancing the specified volume.
.TP
\fB\ volume rebalance <VOLNAME> status \fR
Display the rebalance status of the specified volume.
.SS "Log Commands"
.TP
\fB\ volume log <VOLNAME> rotate [BRICK] \fR
Rotate the log file for the corresponding volume/brick.
.TP
\fB\ volume profile <VOLNAME> {start|info [peek|incremental [peek]|cumulative|clear]|stop} [nfs] \fR
Profile operations on the volume. Once started, 'volume profile <VOLNAME> info' provides cumulative statistics of the FOPs performed.
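.IP
A typical profiling session (the volume name is a placeholder):
.nf
# gluster volume profile test-volume start
# gluster volume profile test-volume info
# gluster volume profile test-volume stop
.fi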
.TP
\fB\ volume top <VOLNAME> {open|read|write|opendir|readdir|clear} [nfs|brick <brick>] [list-cnt <value>] | {read-perf|write-perf} [bs <size> count <count>] [brick <brick>] [list-cnt <value>] \fR
Generates a profile of a volume representing the performance and bottlenecks/hotspots of each brick.
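.IP
For example, to list the ten most-opened files, or to sample read throughput (all values are illustrative):
.nf
# gluster volume top test-volume open list-cnt 10
# gluster volume top test-volume read-perf bs 256 count 1 list-cnt 10
.fi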
.TP
\fB\ volume statedump <VOLNAME> [[nfs|quotad] [all|mem|iobuf|callpool|priv|fd|inode|history]... | [client <hostname:process-id>]] \fR
Dumps the in-memory state of the specified process or of the bricks of the volume.
.TP
\fB\ volume sync <HOSTNAME> [all|<VOLNAME>] \fR
Sync the volume information from a peer.
.SS "Peer Commands"
.TP
\fB\ peer probe <HOSTNAME> \fR
Probe the specified peer. In case the <HOSTNAME> given belongs to an already probed peer, the peer probe command will add the hostname to the peer if required.
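.IP
For example, to add a node to the trusted storage pool (the hostname is a placeholder):
.nf
# gluster peer probe server2
.fi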
.TP
\fB\ peer detach <HOSTNAME> \fR
Detach the specified peer.
.TP
\fB\ peer status \fR
Display the status of peers.
.TP
\fB\ pool list \fR
List all the nodes in the pool (including localhost).
.TP
\fB\ peer help \fR
Display help for the peer command.
.SS "Quota Commands"
.TP
\fB\ volume quota <VOLNAME> enable \fR
Enable quota on the specified volume. This will cause all the directories in the filesystem hierarchy to be accounted and updated thereafter on each operation in the filesystem. To kick-start this accounting, a crawl is done over the hierarchy with an auxiliary client.
.TP
\fB\ volume quota <VOLNAME> disable \fR
Disable quota on the volume. This will disable enforcement and accounting in the filesystem. Any configured limits will be lost.
.TP
\fB\ volume quota <VOLNAME> limit-usage <PATH> <SIZE> [<PERCENT>] \fR
Set a usage limit on the given path. Any previously set limit is overridden by the new value. The soft limit can optionally be specified (as a percentage of the hard limit). If the soft limit percentage is not provided, the default soft limit value for the volume is used to decide the soft limit.
.TP
\fB\ volume quota <VOLNAME> limit-objects <PATH> <SIZE> [<PERCENT>] \fR
Set an inode limit on the given path. Any previously set limit is overridden by the new value. The soft limit can optionally be specified (as a percentage of the hard limit). If the soft limit percentage is not provided, the default soft limit value for the volume is used to decide the soft limit.
.TP
NOTE: valid units of SIZE are: B, KB, MB, GB, TB, PB. If no unit is specified, the unit defaults to bytes.
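.IP
For example, to cap a directory at 10 GB (the volume name and path are placeholders):
.nf
# gluster volume quota test-volume limit-usage /data 10GB
.fi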
.TP
\fB\ volume quota <VOLNAME> remove <PATH> \fR
Remove any usage limit configured on the specified directory. Note that if any limit is configured on the ancestors of this directory (previous directories along the path), they will still be honored and enforced.
.TP
\fB\ volume quota <VOLNAME> remove-objects <PATH> \fR
Remove any inode limit configured on the specified directory. Note that if any limit is configured on the ancestors of this directory (previous directories along the path), they will still be honored and enforced.
.TP
\fB\ volume quota <VOLNAME> list <PATH> \fR
Lists the usage and limits configured on directories. If a path is given, only the limit configured on that directory (if any) is displayed along with the directory's usage. If no path is given, usage and limits are displayed for all directories that have limits configured.
.TP
\fB\ volume quota <VOLNAME> list-objects <PATH> \fR
Lists the inode usage and inode limits configured on directories. If a path is given, only the limit configured on that directory (if any) is displayed along with the directory's inode usage. If no path is given, usage and limits are displayed for all directories that have limits configured.
.TP
\fB\ volume quota <VOLNAME> default-soft-limit <PERCENT> \fR
Set the percentage value for the default soft limit for the volume.
.TP
\fB\ volume quota <VOLNAME> soft-timeout <TIME> \fR
Set the soft timeout for the volume: the interval at which limits are retested before the soft limit is breached.
.TP
\fB\ volume quota <VOLNAME> hard-timeout <TIME> \fR
Set the hard timeout for the volume: the interval at which limits are retested after the soft limit is breached.
.TP
\fB\ volume quota <VOLNAME> alert-time <TIME> \fR
Set the frequency at which warning messages are logged (in the brick logs) once the soft limit is breached.
.TP
\fB\ volume inode-quota <VOLNAME> enable/disable \fR
Enable/disable inode-quota for <VOLNAME>.
.TP
\fB\ volume quota help \fR
Display help for the volume quota commands.
.TP
NOTE: valid units of time and their symbols are: hours (h/hr), minutes (m/min), seconds (s/sec), weeks (w/wk), days (d/days).
.SS "Geo-replication Commands"
.TP
\fI\ Note\fR: password-less ssh, from the primary node (where these commands are executed) to the secondary node <SECONDARY_HOST>, is a prerequisite for the geo-replication commands.
.TP
\fB\ system:: execute gsec_create\fR
Generates the pem keys which are required for push-pem.
.TP
\fB\ volume geo-replication <PRIMARY_VOL> <SECONDARY_HOST>::<SECONDARY_VOL> create [[ssh-port n][[no-verify]|[push-pem]]] [force]\fR
Create a new geo-replication session from <PRIMARY_VOL> to the <SECONDARY_HOST> host machine having <SECONDARY_VOL>.
Use ssh-port n if a custom SSH port is configured on the secondary nodes.
Use no-verify if the rsa-keys of the nodes in the primary volume have been distributed to the secondary nodes through an external agent.
Use push-pem to push the keys automatically.
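.IP
For example, to create and start a session, assuming the pem keys have already been generated with gsec_create (volume and host names are placeholders):
.nf
# gluster volume geo-replication primary-vol sec-node::secondary-vol create push-pem
# gluster volume geo-replication primary-vol sec-node::secondary-vol start
.fi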
.TP
\fB\ volume geo-replication <PRIMARY_VOL> <SECONDARY_HOST>::<SECONDARY_VOL> {start|stop} [force] \fR
Start/stop the geo-replication session from <PRIMARY_VOL> to the <SECONDARY_HOST> host machine having <SECONDARY_VOL>.
.TP
\fB\ volume geo-replication [<PRIMARY_VOL> [<SECONDARY_HOST>::<SECONDARY_VOL>]] status [detail] \fR
Query the status of the geo-replication session from <PRIMARY_VOL> to the <SECONDARY_HOST> host machine having <SECONDARY_VOL>.
.TP
\fB\ volume geo-replication <PRIMARY_VOL> <SECONDARY_HOST>::<SECONDARY_VOL> {pause|resume} [force] \fR
Pause/resume the geo-replication session from <PRIMARY_VOL> to the <SECONDARY_HOST> host machine having <SECONDARY_VOL>.
.TP
\fB\ volume geo-replication <PRIMARY_VOL> <SECONDARY_HOST>::<SECONDARY_VOL> delete [reset-sync-time]\fR
Delete the geo-replication session from <PRIMARY_VOL> to the <SECONDARY_HOST> host machine having <SECONDARY_VOL>.
Optionally, you can also reset the sync time in case you need to resync the entire volume when the session is recreated.
.TP
\fB\ volume geo-replication <PRIMARY_VOL> <SECONDARY_HOST>::<SECONDARY_VOL> config [[!]<options> [<value>]] \fR
View (when no option is provided) or set the configuration for this geo-replication session.
Use "!<OPTION>" to reset option <OPTION> to its default value.
.SS "Bitrot Commands"
.TP
\fB\ volume bitrot <VOLNAME> {enable|disable} \fR
Enable/disable bitrot for volume <VOLNAME>.
.TP
\fB\ volume bitrot <VOLNAME> signing-time <time-in-secs> \fR
Waiting time, after the last fd on an object is closed, before the signing process starts.
.TP
\fB\ volume bitrot <VOLNAME> signer-threads <count> \fR
Number of signing-process threads. Usually set to the number of available cores.
.TP
\fB\ volume bitrot <VOLNAME> scrub-throttle {lazy|normal|aggressive} \fR
The scrub-throttle value is a measure of how fast or slow the scrubber scrubs the filesystem for volume <VOLNAME>.
.TP
\fB\ volume bitrot <VOLNAME> scrub-frequency {hourly|daily|weekly|biweekly|monthly} \fR
Scrub frequency for volume <VOLNAME>.
.TP
\fB\ volume bitrot <VOLNAME> scrub {pause|resume|status|ondemand} \fR
Pause/resume the scrub. Upon resume, the scrubber continues where it left off. The status option shows the statistics of the scrubber. The ondemand option starts the scrubbing immediately if the scrubber is not paused or already running.
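.IP
For example, to check the scrubber's progress on a volume (the volume name is a placeholder):
.nf
# gluster volume bitrot test-volume scrub status
.fi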
.TP
\fB\ volume bitrot help \fR
Display help for the volume bitrot commands.
.SS "Snapshot Commands"
.TP
\fB\ snapshot create <snapname> <volname> [no-timestamp] [description <description>] [force] \fR
Creates a snapshot of a GlusterFS volume. The user can provide a snap-name and a description to identify the snapshot. The snapshot will be created by appending a timestamp in GMT; the user can override this behaviour using the "no-timestamp" option. The description cannot be more than 1024 characters. To be able to take a snapshot, the volume should be present and in the started state.
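.IP
For example (snapshot name, volume name, and description are placeholders):
.nf
# gluster snapshot create snap1 test-volume no-timestamp description "before upgrade"
.fi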
.TP
\fB\ snapshot restore <snapname> \fR
Restores an already taken snapshot of a GlusterFS volume. Snapshot restore is an offline activity; therefore, if the volume is online (in the started state), the restore operation will fail. Once the snapshot is restored, it will no longer be available in the list of snapshots.
.TP
\fB\ snapshot clone <clonename> <snapname> \fR
Create a clone of a snapshot volume; the resulting volume will be a GlusterFS volume. The user can provide a clone-name. To be able to take a clone, the snapshot should be present and in the activated state.
.TP
\fB\ snapshot delete ( all | <snapname> | volume <volname> ) \fR
If snapname is specified, the mentioned snapshot is deleted. If volname is specified, all snapshots belonging to that particular volume are deleted. If the keyword *all* is used, all snapshots belonging to the system are deleted.
.TP
\fB\ snapshot list [volname] \fR
Lists all snapshots taken. If volname is provided, only the snapshots belonging to that particular volume are listed.
.TP
\fB\ snapshot info [snapname | (volume <volname>)] \fR
This command gives information such as the snapshot name, snapshot UUID, time at which the snapshot was created, the snap-volume-name, the number of snapshots already taken and the number of snapshots still available for that particular volume, and the state of the snapshot. If snapname is specified, info of the mentioned snapshot is displayed. If volname is specified, info of all snapshots belonging to that volume is displayed. If neither snapname nor volname is specified, info of all the snapshots present in the system is displayed.
.TP
\fB\ snapshot status [snapname | (volume <volname>)] \fR
This command gives the status of the snapshot. The details included are the snapshot brick path, the volume group (LVM details), the status of the snapshot bricks, the PID of the bricks, the data percentage filled for the volume group to which the snapshots belong, and the total size of the logical volume.

If snapname is specified, the status of the mentioned snapshot is displayed. If volname is specified, the status of all snapshots belonging to that volume is displayed. If neither snapname nor volname is specified, the status of all the snapshots present in the system is displayed.
.TP
\fB\ snapshot config [volname] ([snap-max-hard-limit <count>] [snap-max-soft-limit <percent>]) | ([auto-delete <enable|disable>]) | ([activate-on-create <enable|disable>]) \fR
Displays and sets the snapshot config values.

snapshot config without any keywords displays the snapshot config values of all volumes in the system. If volname is provided, the snapshot config values of that volume are displayed.

The snapshot config command along with keywords can be used to change the existing config values. If volname is provided, the config value of that volume is changed; otherwise it will set/change the system limit.

snap-max-soft-limit and auto-delete are global options that will be inherited by all volumes in the system and cannot be set on individual volumes.

snap-max-hard-limit can be set globally, as well as per volume. The lower of the global system limit and the volume-specific limit becomes the "Effective snap-max-hard-limit" for a volume.

snap-max-soft-limit is a percentage value, which is applied on the "Effective snap-max-hard-limit" to get the "Effective snap-max-soft-limit".

When the auto-delete feature is enabled, then upon reaching the "Effective snap-max-soft-limit", the oldest snapshot will be deleted with every successful snapshot creation.

When the auto-delete feature is disabled, then upon reaching the "Effective snap-max-soft-limit", the user gets a warning with every successful snapshot creation.

When the auto-delete feature is disabled, then upon reaching the "Effective snap-max-hard-limit", further snapshot creations will not be allowed.

activate-on-create is disabled by default. If you enable activate-on-create, further snapshots will be activated at the time of snapshot creation.
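.IP
For example, to allow at most 100 snapshots on one volume (the volume name and count are illustrative):
.nf
# gluster snapshot config test-volume snap-max-hard-limit 100
.fi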
.TP
\fB\ snapshot activate <snapname> \fR
Activates the mentioned snapshot.

Note: By default the snapshot is not activated during snapshot creation (see activate-on-create above).
.TP
\fB\ snapshot deactivate <snapname> \fR
Deactivates the mentioned snapshot.
.TP
\fB\ snapshot help \fR
Display help for the snapshot commands.
.SS "Self-heal Commands"
.TP
\fB\ volume heal <VOLNAME>\fR
Triggers index self heal for the files that need healing.

.TP
\fB\ volume heal <VOLNAME> [enable | disable]\fR
Enable/disable self-heal-daemon for volume <VOLNAME>.

.TP
\fB\ volume heal <VOLNAME> full\fR
Triggers self heal on all the files.

.TP
\fB\ volume heal <VOLNAME> info \fR
Lists the files that need healing.

.TP
\fB\ volume heal <VOLNAME> info split-brain \fR
Lists the files which are in split-brain state.

.TP
\fB\ volume heal <VOLNAME> statistics \fR
Lists the crawl statistics.

.TP
\fB\ volume heal <VOLNAME> statistics heal-count \fR
Displays the count of files to be healed.

.TP
\fB\ volume heal <VOLNAME> statistics heal-count replica <HOSTNAME:BRICKNAME> \fR
Displays the number of files to be healed from a particular replica subvolume to which the brick <HOSTNAME:BRICKNAME> belongs.

.TP
\fB\ volume heal <VOLNAME> split-brain bigger-file <FILE> \fR
Performs healing of <FILE>, which is in split-brain, by choosing the bigger file in the replica as the source.

.TP
\fB\ volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> \fR
Selects <HOSTNAME:BRICKNAME> as the source for all the files that are in split-brain in that replica and heals them.

.TP
\fB\ volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE> \fR
Selects the split-brained <FILE> present in <HOSTNAME:BRICKNAME> as the source and completes the heal.
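.IP
For example, to resolve a split-brain on one file using the larger copy (the volume name and path are placeholders):
.nf
# gluster volume heal test-volume split-brain bigger-file /dir/file.txt
.fi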
.SS "Other Commands"
.TP
\fB\ get-state [<daemon>] [[odir </path/to/output/dir/>] [file <filename>]] [detail|volumeoptions] \fR
Get the local state representation of the mentioned daemon and store the data at the provided output path.
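.IP
For example, to dump glusterd's local state to a chosen directory and file name (both illustrative):
.nf
# gluster get-state glusterd odir /var/tmp/ file glusterd-state
.fi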
.TP
\fB\ help \fR
Display the command options.
.TP
\fB\ quit \fR
Exit the gluster command line interface.

.SH FILES
/var/lib/glusterd/*
.SH SEE ALSO
.nf
\fBfusermount\fR(1), \fBmount.glusterfs\fR(8), \fBglusterfs\fR(8), \fBglusterd\fR(8)
\fR
.fi
.SH COPYRIGHT
.nf
Copyright(c) 2006-2011  Gluster, Inc.  <http://www.gluster.com>