<#import "/templates/guide.adoc" as tmpl>
<#import "/templates/links.adoc" as links>

<@tmpl.guide
title="Deploy an AWS Route 53 loadbalancer"
summary="Building block for a loadbalancer"
tileVisible="false" >

This topic describes the procedure required to configure DNS-based failover for Multi-AZ {project_name} clusters using AWS Route53 for an active/passive setup. These instructions are intended to be used with the setup described in the <@links.ha id="concepts-active-passive-sync"/> {section}.
Use it together with the other building blocks outlined in the <@links.ha id="bblocks-active-passive-sync"/> {section}.

include::partials/blueprint-disclaimer.adoc[]

== Architecture

All {project_name} client requests are routed by a DNS name managed by Route53 records.
Route53 is responsible for ensuring that all client requests are routed to the Primary cluster when it is available and healthy, or to the Backup cluster if the primary availability zone or {project_name} deployment fails.

If the primary site fails, the DNS changes need to propagate to the clients.
Depending on the client's configuration, this propagation may take some minutes.
When using mobile connections, some internet providers might not respect the TTL of the DNS entries, which can lead to an extended time before the clients can connect to the new site.
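
As a quick way to observe what the client-facing domain currently resolves to, and the TTL that clients see, you can query the record directly. This is a minimal sketch; it assumes `client.keycloak-benchmark.com` is the client-facing domain configured later in this procedure.

[source,bash]
----
# Query the failover record and show the answer together with its remaining TTL
dig +noall +answer client.keycloak-benchmark.com
----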

.AWS Route 53 Failover
image::high-availability/route53-multi-az-failover.svg[]

Two OpenShift Routes are exposed on both the Primary and Backup ROSA cluster.
The first Route uses the Route53 DNS name to service client requests, whereas the second Route is used by Route53 to monitor the health of the {project_name} cluster.
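
Once the deployment steps below are complete, you can list both Routes on each cluster as a sanity check; this assumes `$NAMESPACE` holds the namespace of your {project_name} deployment.

[source,bash]
----
# List the Routes exposing the deployment and the Route53 health check endpoint
oc -n $NAMESPACE get routes
----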

== Prerequisites

* Deployment of {project_name} as described in <@links.ha id="deploy-keycloak-kubernetes" /> on a ROSA cluster running OpenShift 4.14 or later, in two AWS availability zones within a single AWS region.
* An owned domain for client requests to be routed through.

== Procedure

. [[create-hosted-zone]]Create a https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingHostedZone.html[Route53 Hosted Zone] using the root domain name through which you want all {project_name} clients to connect.
+
Take note of the "Hosted zone ID", because this ID is required in later steps.
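+
If you prefer to create the Hosted Zone from the command line rather than the console, a minimal sketch using the AWS CLI is shown below; the domain name is an example and the caller reference only needs to be a unique string.
+
[source,bash]
----
# Example only: create a public Hosted Zone for the root domain and print its ID
aws route53 create-hosted-zone \
  --name "keycloak-benchmark.com" \
  --caller-reference "$(date +%s)" \
  --query "HostedZone.Id" \
  --output text
----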

. Retrieve the "Hosted zone ID" and DNS name associated with each ROSA cluster.
+
For both the Primary and Backup cluster, perform the following steps:
+
.. Log in to the ROSA cluster.
+
.. Retrieve the cluster LoadBalancer Hosted Zone ID and DNS hostname
+
.Command:
[source,bash]
----
<#noparse>
HOSTNAME=$(oc -n openshift-ingress get svc router-default \
  -o jsonpath='{.status.loadBalancer.ingress[].hostname}'
)
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?DNSName=='${HOSTNAME}'].{CanonicalHostedZoneId:CanonicalHostedZoneId,DNSName:DNSName}" \
  --region eu-west-1 \#<1>
  --output json
</#noparse>
----
<1> The AWS region hosting your ROSA cluster
+
.Output:
[source,json]
----
[
    {
        "CanonicalHostedZoneId": "Z2IFOLAFXWLO4F",
        "DNSName": "ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com"
    }
]
----
+
NOTE: ROSA clusters running OpenShift 4.13 and earlier use classic load balancers instead of application load balancers. Use the `aws elb describe-load-balancers` command and an updated query string instead.
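+
For those older clusters, a hedged sketch of the equivalent classic load balancer query is shown below; the field names follow the `aws elb describe-load-balancers` output, and the region is an example.
+
[source,bash]
----
# Example only: classic ELBs expose the zone ID as CanonicalHostedZoneNameID
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[?DNSName=='${HOSTNAME}'].{CanonicalHostedZoneNameID:CanonicalHostedZoneNameID,DNSName:DNSName}" \
  --region eu-west-1 \
  --output json
----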

. Create Route53 health checks
+
.Command:
[source,bash]
----
<#noparse>
function createHealthCheck() {
  # Hash the domain name so that the caller reference stays within the 64 character limit
  REF=($(echo $1 | sha1sum ))
  aws route53 create-health-check \
  --caller-reference "$REF" \
  --query "HealthCheck.Id" \
  --no-cli-pager \
  --output text \
  --health-check-config '
  {
    "Type": "HTTPS",
    "ResourcePath": "/lb-check",
    "FullyQualifiedDomainName": "'$1'",
    "Port": 443,
    "RequestInterval": 30,
    "FailureThreshold": 1,
    "EnableSNI": true
  }
  '
}
CLIENT_DOMAIN="client.keycloak-benchmark.com" #<1>
PRIMARY_DOMAIN="primary.${CLIENT_DOMAIN}" #<2>
BACKUP_DOMAIN="backup.${CLIENT_DOMAIN}" #<3>
createHealthCheck ${PRIMARY_DOMAIN}
createHealthCheck ${BACKUP_DOMAIN}
</#noparse>
----
<1> The domain which {project_name} clients should connect to.
This should be the same as, or a subdomain of, the root domain used to create the xref:create-hosted-zone[Hosted Zone].
<2> The subdomain that will be used for health probes on the Primary cluster
<3> The subdomain that will be used for health probes on the Backup cluster
+
.Output:
[source,bash]
----
233e180f-f023-45a3-954e-415303f21eab #<1>
799e2cbb-43ae-4848-9b72-0d9173f04912 #<2>
----
<1> The ID of the Primary Health check
<2> The ID of the Backup Health check
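+
Once the health check Routes have been created in the later steps, you can optionally confirm that Route53 considers them healthy. This is a sketch; substitute your own health check ID from the output above.
+
[source,bash]
----
# Example only: show the latest Route53 observations for one health check
aws route53 get-health-check-status \
  --health-check-id 233e180f-f023-45a3-954e-415303f21eab
----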
+
. Create the Route53 record set
+
.Command:
[source,bash]
----
<#noparse>
HOSTED_ZONE_ID="Z09084361B6LKQQRCVBEY" #<1>
PRIMARY_LB_HOSTED_ZONE_ID="Z2IFOLAFXWLO4F"
PRIMARY_LB_DNS=ad62c8d2fcffa4d54aec7ffff902c925-61f5d3e1cbdc5d42.elb.eu-west-1.amazonaws.com
PRIMARY_HEALTH_ID=233e180f-f023-45a3-954e-415303f21eab
BACKUP_LB_HOSTED_ZONE_ID="Z2IFOLAFXWLO4F"
BACKUP_LB_DNS=a184a0e02a5d44a9194e517c12c2b0ec-1203036292.elb.eu-west-1.amazonaws.com
BACKUP_HEALTH_ID=799e2cbb-43ae-4848-9b72-0d9173f04912
aws route53 change-resource-record-sets \
  --hosted-zone-id ${HOSTED_ZONE_ID} \
  --query "ChangeInfo.Id" \
  --output text \
  --change-batch '
  {
    "Comment": "Creating Record Set for '${CLIENT_DOMAIN}'",
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${PRIMARY_DOMAIN}'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'${PRIMARY_LB_HOSTED_ZONE_ID}'",
          "DNSName": "'${PRIMARY_LB_DNS}'",
          "EvaluateTargetHealth": true
        }
      }
    }, {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${BACKUP_DOMAIN}'",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "'${BACKUP_LB_HOSTED_ZONE_ID}'",
          "DNSName": "'${BACKUP_LB_DNS}'",
          "EvaluateTargetHealth": true
        }
      }
    }, {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${CLIENT_DOMAIN}'",
        "Type": "A",
        "SetIdentifier": "client-failover-primary-'${CLIENT_DOMAIN}'",
        "Failover": "PRIMARY",
        "HealthCheckId": "'${PRIMARY_HEALTH_ID}'",
        "AliasTarget": {
          "HostedZoneId": "'${HOSTED_ZONE_ID}'",
          "DNSName": "'${PRIMARY_DOMAIN}'",
          "EvaluateTargetHealth": true
        }
      }
    }, {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "'${CLIENT_DOMAIN}'",
        "Type": "A",
        "SetIdentifier": "client-failover-backup-'${CLIENT_DOMAIN}'",
        "Failover": "SECONDARY",
        "HealthCheckId": "'${BACKUP_HEALTH_ID}'",
        "AliasTarget": {
          "HostedZoneId": "'${HOSTED_ZONE_ID}'",
          "DNSName": "'${BACKUP_DOMAIN}'",
          "EvaluateTargetHealth": true
        }
      }
    }]
  }
  '
</#noparse>
----
<1> The ID of the xref:create-hosted-zone[Hosted Zone] created earlier
199
+
200
.Output:
201
[source]
202
----
203
/change/C053410633T95FR9WN3YI
204
----
205
+
206
. Wait for the Route53 records to be updated
207
+
208
.Command:
209
[source,bash]
210
----
211
aws route53 wait resource-record-sets-changed --id /change/C053410633T95FR9WN3YI
212
----
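+
After the change has propagated, you can optionally list the records to confirm the failover policy, as sketched below; the trailing dot on the record name follows the Route53 API convention.
+
[source,bash]
----
# Example only: list the failover records created for the client domain
aws route53 list-resource-record-sets \
  --hosted-zone-id ${HOSTED_ZONE_ID} \
  --query "ResourceRecordSets[?Name=='${CLIENT_DOMAIN}.']"
----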
213
+
214
. Update or create the {project_name} deployment
215
+
216
For both the Primary and Backup cluster, perform the following steps:
217
+
218
.. Log in to the ROSA cluster
219
+
220
.. Ensure the {project_name} CR has the following configuration
221
+
222
[source,yaml]
223
----
224
<#noparse>
225
apiVersion: k8s.keycloak.org/v2alpha1
226
kind: {project_name}
227
metadata:
228
  name: keycloak
229
spec:
230
  hostname:
231
    hostname: ${CLIENT_DOMAIN} # <1>
232
</#noparse>
233
----
234
<1> The domain clients used to connect to {project_name}
235
+
236
To ensure that request forwarding works, edit the {project_name} CR to specify the hostname through which clients will access the {project_name} instances.
237
This hostname must be the `$CLIENT_DOMAIN` used in the Route53 configuration.
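+
One way to apply this change to an existing deployment, assuming the CR is named `keycloak` as above and `$NAMESPACE` is its namespace, is a merge patch; this is a sketch rather than the only supported approach.
+
[source,bash]
----
# Example only: set the client-facing hostname on an existing CR
kubectl -n $NAMESPACE patch keycloak keycloak --type merge \
  -p '{"spec":{"hostname":{"hostname":"'"${CLIENT_DOMAIN}"'"}}}'
----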
238
+
239
.. Create health check Route
240
+
241
.Command:
242
[source,bash]
243
----
244
cat <<EOF | kubectl apply -n $NAMESPACE -f - #<1>
245
apiVersion: route.openshift.io/v1
246
kind: Route
247
metadata:
248
  name: aws-health-route
249
spec:
250
  host: $DOMAIN #<2>
251
  port:
252
    targetPort: https
253
  tls:
254
    insecureEdgeTerminationPolicy: Redirect
255
    termination: passthrough
256
  to:
257
    kind: Service
258
    name: keycloak-service
259
    weight: 100
260
  wildcardPolicy: None
261

262
EOF
263
----
264
<1> `$NAMESPACE` should be replaced with the namespace of your {project_name} deployment
265
<2> `$DOMAIN` should be replaced with either the `PRIMARY_DOMAIN` or `BACKUP_DOMAIN`, if the current cluster is the Primary of Backup cluster, respectively.
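+
Once the Route is admitted, a quick way to confirm that the health endpoint used by Route53 responds is to request it directly, as sketched below; `$DOMAIN` is the health probe subdomain used above.
+
[source,bash]
----
# Example only: expect an HTTP 200 once the Route and DNS records are in place
curl -s -o /dev/null -w '%{http_code}\n' https://$DOMAIN/lb-check
----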

== Verify

Navigate to the chosen `CLIENT_DOMAIN` in your local browser and log in to the {project_name} console.

To test that failover works as expected, log in to the Primary cluster and scale the {project_name} deployment to zero Pods.
Scaling will cause the Primary's health checks to fail, and Route53 should start routing traffic to the {project_name} Pods on the Backup cluster.
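
A minimal sketch of this test, assuming the CR is named `keycloak` and `$NAMESPACE` is the deployment namespace, is to set the CR's instance count to zero and then watch the client domain switch over to the Backup load balancer:

[source,bash]
----
# Example only: take the Primary deployment down by scaling the CR to zero instances
kubectl -n $NAMESPACE patch keycloak keycloak --type merge -p '{"spec":{"instances":0}}'

# Example only: once the Primary health check fails, the answer should change to the Backup load balancer
dig +noall +answer $CLIENT_DOMAIN
----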

</@tmpl.guide>