
metro node Initialization Document (Part 1)

2023-07-27 23:45  Author: 墨小胎

1. Hardware Installation


 

1.1 Install the metro node servers

1. Install the servers into the cabinet by following the Dell EMC Rail Installation Guide:

a. Remove the slide rails and install them in two consecutive 1U positions in the target cabinet.

b. Remove each server node from its packaging and mount it on the installed rails.

Note: When mounting, place the lower Dell Service Tag (DST) on top.

2. Install an LCD panel on each server.


3. Keep the 10GbE SFP optical modules for later installation.

 

1.2 Install the cable management components

1. Attach a CMA attachment bracket to the server rails of each server.


2. Attach a CMA tray to the server rails of each server.


3. Install the pre-cabled cable management arm (CMA) kit on the rail assembly of each server node.

a. Attach the CMA inner bracket (1) to the left inner rail.

b. Attach the CMA outer bracket (2) to the left outer fixed rail.

c. Repeat on the other server node.

d. Use Velcro straps to secure the cable loops and the red service cable to the side of the rack.

Caution: Make sure the cable loops are supported firmly on the rack and do not pull on the cables inside the management arm.



1.3 Connect the CMA kit cables to the server nodes

The Port Map Label image below shows the port assignments used in the following steps:

1. Install the WAN SFP optical modules into the two WAN ports of each node.

a. Insert a 10GbE SFP module into the port labeled "WAN1" on the port map.

b. Insert a 10GbE SFP module into the port labeled "WAN2" on the port map.

c. Gently pull on each SFP I/O module to confirm that it is locked in place. If an SFP is loose:

1. Open the latch. 2. Fully insert the SFP module. 3. Close the latch. 4. Repeat on each node.

Note: For a Local system configuration: a) still install the WAN SFPs in the WAN ports, for safekeeping and/or possible future use; b) make sure the protective plugs are in place to protect the SFPs while they are not in use.

2. Connect the LCOM DAC (Direct Attach Copper) cables to the two LCOM ports of each node.

a. Plug the black cable labeled LCOM1 into the port labeled LCOM1 on the port map. Repeat on each node.

b. Connect the black cable labeled LCOM2 to the port labeled LCOM2 on the port map. Repeat on each node.

c. Gently pull each cable at its node connection to confirm that it is seated firmly.

3. Connect the MGMT Ethernet cables to the two MGMT ports of each node.

a. Connect the green cable labeled MGMT1 to the corresponding MGMT1 port as shown on the port map. Repeat on each node.

b. Connect the purple cable labeled "MGMT2" to the corresponding "MGMT2" port as shown on the port map. Repeat on each node.

c. Gently pull each cable at its node connection to confirm that the latch is engaged firmly.

4. Connect the AC power cords to the power supply units of each node.

a. Connect the black power cord to PSU1, the black "AC power" block on the port map.

b. Connect the grey power cord to PSU2, the grey "AC power" block on the port map.

c. Repeat on the other node.

d. Check that the power cord routing allows the server node to slide freely on its rails.

Note: Loop the power cords as shown in the figure and secure them to the PSU handles with Velcro straps. Make sure the straps lift the power cords so that they do not bind when the server is extended during service.

1.4 Connect the power cords to redundant power sources

Connect the black and grey power cords to separate utility power feeds.

Caution: Redundant power connections are required for HA maintenance. Make sure the grey and black power cords are connected to different power sources.

1. Connect the grey power cords to the PDU (Power Distribution Unit) on utility power feed A.

2. Connect the black power cords to the PDU on utility power feed B.

 

1.5 Verify metro node cabling and power

1. Make sure each node is powered from two power zones. Confirm that the handle LED on each of the node's power supply units is solid green.

Note: If a power LED is not lit, check the power cord connections. If the LED is not solid green, see the PowerEdge R640 Installation and Service Manual (https://topics-cdn.dell.com/pdf/poweredge-r640_Owners-Manual_en-us.pdf) for the meaning of the power LED states.


2. Power on both servers and check the health status on the left front panel.

a. Confirm that the power button LED is green (front right).

b. Confirm that there is no amber health status LED on the left front panel.


3. Check that the LINK LEDs of the MGMT1 and MGMT2 ports are solid green.


4. Confirm that the LINK LEDs of the LCOM1 and LCOM2 ports are solid green.


1.6 Make the Local connections

For the following connections, route all cables through the CMA assembly to the side of the cabinet. Use Velcro straps to secure the cables to the CMA and to the side of the cabinet.

Caution: All node connections must be routed through the CMA assembly to preserve HA while the system is in service. Leave enough slack for the arm to move without straining the cables.

1. Connect the customer-supplied fiber cables from the front-end and back-end SANs to the appropriate metro node ports on both directors.

Caution: To avoid damage or contamination, do not touch the ends of the fiber cables.

Caution: Use redundant physical fiber links to connect each host to the metro node directors, and each metro node director to the back-end storage. To prevent data unavailability, make sure every host in a storage view has paths to both directors in the cluster, and that multipathing is configured to spread I/O evenly across director A and director B.


Note: The fiber connections on HBA2 are reversed compared to HBA1.

2. Connect the metro node CUST ports on both directors to the customer IP network.


1.7 Make the Metro connections

Note: This section applies only to dual-cluster 'metro' system configurations. For a 'local' system configuration, skip this section.

For the following connections, route all cables through the CMA assembly to the side of the cabinet. Use Velcro straps to secure the cables to the CMA and to the side of the cabinet.

Caution: All node connections must be routed through the CMA (cable management arm) so that HA maintenance can be performed while the system is in service.

1. Connect the inter-cluster WAN IP links. Make sure each node (director) has two independent paths to every node (director) in the other cluster. Metro node supports optical IP WAN connections only.



2. Initialization Preparation

2.1 Prepare the required information

Metadata volume preparation

1. Meta volumes: export 4 storage volumes of >= 80 GB each from two different storage arrays.

2. Logging volumes: export 4 storage volumes of >= 5 GB and <= 20 GB each from two different storage arrays (Metro configuration only).

 

User input details for the system configuration




IP address plan

 


 

2.2 Connect to a node

There are two ways to connect from the service laptop to a node:

1. Through the SVC port.

Note: On metro node version 7.0 as shipped by Dell, connecting through the SVC port does not work.

2. Through the iDRAC Direct Micro-USB port.

 

2.2.1 Connect through the SVC port

1. This step applies if the cabinet has a laptop tray. Follow these steps:

At the front of the cabinet, remove the filler panel at position U23 (38.5 inches from the bottom of the cabinet). Slide the laptop tray out and place the laptop on it.

To release the strap, press the clip that secures the red service cable to the tray.

Connect the service cable to the laptop's Ethernet port.

Connect the other end of the cable to the node's SVC port.

2. Open a browser on the laptop and log in to http://128.228.221.2. Follow the prompts on the page, then go to step 3.

Note: If the web page does not appear, confirm that DHCP is enabled on the laptop.

3. Run "PuTTY.exe" on the laptop.

Note: Several tasks in this document use the PuTTY tool to log in to the management server. You can use any similar Telnet/SSH client; a command-line alternative is sketched after this list. Make sure that any client uses the SSH protocol (version 2). If a PuTTY security alert appears, read it and then click Yes.

4. In the "PuTTY Configuration" window, configure the settings as shown in the figure below.

5. Under "Category", select "SSH" and make sure "Preferred SSH protocol version" is set to "2".

6. In the Category list, select Session, then click "Save" in the "PuTTY Configuration" window to save the session settings.

7. In the "PuTTY Configuration" window, click "Open" and log in to the node as the "service" user with the service password (contact your system administrator for the service password).

Note: The default shell prompt for the service user is service@localhost:~>.

8. Go to the Initialization Steps section.
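
If you prefer a command-line client over PuTTY, a plain OpenSSH session works just as well. A minimal sketch, assuming the node answers SSH on the same address used in the browser step (the address and the service password are environment-specific):

# Modern OpenSSH clients only speak SSH protocol 2, which satisfies the
# "version 2" requirement above. Replace the address with your node's address.
ssh service@128.228.221.2
# Enter the service password when prompted; the shell prompt should then
# read service@localhost:~>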

2.2.2 Connect through the iDRAC Direct Micro-USB port

The iDRAC Direct USB-NIC feature provides direct access to the iDRAC web interface without using the iDRAC network management port.

1. Prepare a Micro-USB 2.0 cable. Connect one end to the "iDRAC Direct Micro-USB" port on the system front panel, and connect the other end to a USB port on the service laptop.

2. Start any browser and enter https://169.254.0.3 in the address bar (an optional command-line reachability check is sketched after this list).

3. Skip the certificate error and choose to continue to the website.

4. Enter the iDRAC credentials.

5. Open the virtual console, then click Launch Virtual Console.

6. Log in with the service credentials and go to the Initialization Steps section.
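
Optionally, before opening the browser, you can confirm from the laptop's command line that the iDRAC Direct link is up. A minimal sketch, assuming curl is installed on the service laptop (169.254.0.3 is the fixed iDRAC Direct address from step 2):

# -k skips certificate validation (the iDRAC presents a self-signed certificate),
# -s silences progress output, and -o/-w print only the HTTP status code.
# Any 2xx/3xx code means the iDRAC web interface is reachable.
curl -k -s -o /dev/null -w "%{http_code}\n" https://169.254.0.3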

 

3. Initialization Steps

3.1 Pre-configuration

Note:

1. This phase sets the IP address of the EC-01 port on each node and changes the default password of the service user.

2. This step must be performed on every node.

 

Log in to Node-1-1-A as the service user.

Note: During pre-configuration, every node displays the EULA, and the user must accept it before the system configuration can continue. Because the EULA is long, the console output below is an abbreviated version of a pre-configuration run that includes the EULA.

Warning: Do not run the vplex_system_config --start or vplex_system_config -s command on a system that is already configured; doing so shuts down the node firmware and causes a complete cluster outage.

 

service@localhost:/opt/dell/vplex/system_config> vplex_system_config --interview

 

Taking backup of existing SCIF has been started...

Taking backup of existing SCIF has been completed.

Congratulations on your new Dell EMC purchase!

Your purchase and use of this Dell EMC product is subject to and governed by the Dell EMC Commercial Terms of Sale, unless you have a separate written agreement with Dell EMC that specifically applies to your order, and the Dell End User License Agreement (EULA), which are each presented below in the following order:

 

* Commercial Terms of Sale

* End User License Agreement (EULA)

 

The Commercial Terms of Sale for the United States are presented below and are also available online at the website below that corresponds to the country in which this product was purchased. By the act of clicking “I accept,” you agree (or re-affirm your agreement to) the foregoing terms and conditions. To the extent that Dell Inc. or any Dell Inc.’s direct or indirect subsidiary (“Dell”) is deemed under applicable law to have accepted an offer by you: (a) Dell hereby objects to and rejects all additional or inconsistent terms that may be contained in any purchase order or other documentation submitted by you in connection with your order; and (b) Dell hereby conditions its acceptance on your assent that the foregoing terms and conditions shall exclusively control.


IF YOU DO NOT AGREE WITH THESE TERMS, DO NOT USE THIS PRODUCT AND CONTACT YOUR DELL REPRESENTATIVE WITHIN FIVE BUSINESS DAYS TO ARRANGE A RETURN.

 

Commercial Terms of Sale

 

These Commercial Terms of Sale (“CTS”) apply to orders for hardware, software, and services by direct commercial and public sector purchasers and to commercial end-users who purchase through a reseller (“Customer”), unless Customer and Suppliers (defined below) have entered into a separate written agreement that applies to Customer’s orders for specific products or services, in which case, the separate written agreement governs Customer’s purchase and use of such specific products or services.

The term “Supplier(s)” means, as applicable:

 

EMC Corporation (“EMC”)

176 South Street

Hopkinton, Massachusetts 01748

 

and

 

Dell Marketing L.P. (“Dell”)

One Dell Way

Round Rock, Texas 78682

Legal Notices:

Dell_Legal_Notices@Dell.com

 

1. Subject Matter and Parts of CTS.

1.1 Scope. This CTS governs Customer’s procurement and Supplier’s provisioning of Products, Services and Third Party Products (if applicable) (collectively “Offerings”), for Customer’s own internal use.

1.2 Products and Services. “Products” are either: (i) Supplier-branded IT hardware products (“Equipment”) or (ii) Supplier-branded generally available software, whether microcode, firmware, operating systems or applications (“Software”). “Services” are:(a) Supplier’s standard service offerings for maintenance and support of Products (“Support Services”) and (b) consulting, deployment, implementation and any other services that are not Support Services (“Professional Services”). “Third Party Products” means hardware, software, products, or services that are not “Dell” or “Dell EMC” branded. Products exclude Services and Third Party Products.
1.3 Framework. This CTS consists of the main body with the terms and conditions applicable to all Offerings that are in scope, as may be supplemented by additional schedules, containing terms applicable to all or only specific Offerings and shall form an integral part of this CTS (“Schedule(s)”). This CTS does not establish a commitment
of Customer to procure, nor an obligation of Supplier or Affiliate to supply, any Offerings unless the parties have agreed on an Order (as defined below).

1.4 Affiliates. Transactions under this CTS may also involve Dell Inc. or Dell Inc.’s direct or indirect subsidiaries (“Affiliates”).

.

.

.

.

13.7 Entire Agreement. You acknowledge that You have read this EULA, that You understand it, that You agree to be bound by its terms, and that this EULA, along with the Order Terms into which this EULA may be incorporated (as applicable), is the complete and exclusive statement of the agreement between You and Licensor regarding
Your use of the Software. All content referenced in this EULA by hyperlink is incorporated into this EULA in its entirety and is available to You in hardcopy form upon Your request. The pre-printed terms of Your purchase order or any other document that is not issued or signed by Licensor or Dell do not apply to Software. You represent that You did not rely on any representations or statements that do not appear in this EULA when accepting this EULA.

 

Rev.09Sept2020

 

Please press enter to accept End User License Agreement to start the system configuration (default: y): <Enter>

 

Run the vplex_system_config --interview command again to start configuring the management IP address:

service@director-1-1-a:~> vplex_system_config --interview

Taking backup of existing SCIF has been started...

Taking backup of existing SCIF has been completed.

 

Pre-config phase interview process is started...

Creating System Configuration Inventory File at /etc/opt/dell/vplex/scif.yml

 

VPLEX system details

Enter node's EC01 port details:

IP Address : 10.226.81.155

IP Address is validating, please wait...

Node is already configured with same IP address "10.226.81.155"

Netmask : 255.255.248.0

Gateway : 10.226.80.1

MTU (default: 1500): <Enter>

Enter the service user password:

Current password : <Enter>

New password :Mi@Dim7T

Retype new password :Mi@Dim7T

Password has been changed successfully for the 'service' user

################### To receive the highest level of support, inform Dell EMC support if there is a change in the service password ###################

################### The service user's password must be the same on all nodes in a cluster ###################

Please review the EC01 port details entered.

clusters:

nodes:

8K5ZY23

interfaces:

EC-01

ip:

address: 10.226.81.155

netmask: 255.255.248.0

gateway: 10.226.80.1

mtu: 1500

Please review the above details. Do you want to proceed(y/n)? (default: y): <Enter>

IP configuration is successful on this 8K5ZY23

 

 

Repeat the steps above to configure the management IP addresses of the remaining nodes.
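
Once every node has been pre-configured, it can be helpful to confirm from the service laptop (or from any node) that each management address answers. A minimal sketch, assuming the example EC-01 addresses used in this document; substitute your own IP plan:

# Ping each EC-01 management address configured during pre-configuration.
for ip in 10.226.81.155 10.226.81.156 10.226.81.157 10.226.81.158; do
  ping -c 2 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip NOT reachable"
done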

 

3.2 Run the configuration interview (phase 1)

Note: After the pre-configuration phase has completed successfully on all nodes, the system configuration commands can be run from any node. For convenience, make sure that all system configuration commands are run on the same node. In the example below, all system configuration commands are run on Node-1-1-B.

1. Log in to Node-1-1-B as the service user.

2. Run the phase-1 interview with the vplex_system_config --interview or vplex_system_config -i command.

 

service@localhost:~> vplex_system_config -i

Do you want to re-run the pre-config phase(y/n)?

Press "y" to re-run or press "enter" to proceed with phase-1 interview. (default: n): <Enter>

 

Taking backup of existing SCIF has been started...

Taking backup of existing SCIF has been completed.

 

Phase-1 interview process is started...

Using system configuration file at /etc/opt/dell/vplex/scif.yml

 

VPLEX system details

Choose VPlex configuration.

1. Local

2. Metro

Please select configuration (default: local): 2  // For a Local cluster, select 1

Enter 'cluster-1' details:

Service user password : Mi@Dim7T

Re-enter service user password : Mi@Dim7T

Enter 'cluster-1' node details:

Enter 'node-1-1-a' details:

IP address : 10.226.81.155

Enter 'node-1-1-b' details:

IP address : 10.226.81.156

// For a Local cluster, the cluster-2 section below does not appear

Enter 'cluster-2' details:

Service user password :

Re-enter service user password :

Enter 'cluster-2' node details:

Enter 'node-2-1-a' details:

IP address : 10.226.81.157

Enter 'node-2-1-b' details:

IP address : 10.226.81.158

Enabling passwordless SSH between the nodes...

Creating .ssh/ directory on node(node-1-1-a) with 10.226.81.155

Creating .ssh/ directory on node(node-1-1-b) with 10.226.81.156

Creating .ssh/ directory on node(node-2-1-a) with 10.226.81.157

Creating .ssh/ directory on node(node-2-1-b) with 10.226.81.158

Generating SSH key on node(node-1-1-a), with 10.226.81.155

Generating SSH key on node(node-1-1-b), with 10.226.81.156

Generating SSH key on node(node-2-1-a), with 10.226.81.157

Generating SSH key on node(node-2-1-b), with 10.226.81.158

Exchanging the key with other nodes from node(node-1-1-a), with 10.226.81.155

Exchanging the key with other nodes from node(node-1-1-b), with 10.226.81.156

Exchanging the key with other nodes from node(node-2-1-a), with 10.226.81.157

Exchanging the key with other nodes from node(node-2-1-b), with 10.226.81.158

Enabled passwordless SSH successfully between the nodes.

Enter NTP server 1 address (default: 127.0.0.1): <Enter>

Enter NTP server 2 address (default: ): <Enter>

 

Connecting to 10.226.81.155 to fetch gateway and netmask details...

gateway and netmask have been fetched successfully for the node 10.226.81.155

 

Connecting to 10.226.81.156 to fetch gateway and netmask details...

gateway and netmask have been fetched successfully for the node 10.226.81.156

 

Connecting to 10.226.81.157 to fetch gateway and netmask details...

gateway and netmask have been fetched successfully for the node 10.226.81.157

 

Connecting to 10.226.81.158 to fetch gateway and netmask details...

gateway and netmask have been fetched successfully for the node 10.226.81.158

 

Dell service tag and WWN Seed has been read sucessfully for the node 10.226.81.155

 

Dell service tag and WWN Seed has been read sucessfully for the node 10.226.81.156

 

GUID has been read successfully for cluster-1

 

Dell service tag and WWN Seed has been read sucessfully for the node 10.226.81.157

 

Dell service tag and WWN Seed has been read sucessfully for the node 10.226.81.158

 

GUID has been read successfully for cluster-2

 

Please review the details entered.

ntp_server_1: 127.0.0.1

ntp_server_2:

snmp:

clusters

name: cluster-1

director_count: 2

instance: 1

id: 1

guid: 43A52L9

ldap:

call_home:

nodes

8K16Z23

name: director-1-1-A

wwn_seed: 2D60002E

interfaces:

EC-01

role: cust

ip:

address: 10.226.81.155

netmask: 255.255.248.0

gateway: 10.226.80.1

mtu: 1500

LC-00

role: com

ip:

address: 128.221.250.35

netmask: 255.255.255.224

gateway:

mtu: 1500

LC-01

role: com

ip:

address: 128.221.251.35

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-00

role: man

ip:

address: 128.221.252.35

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-01

role: man

ip:

address: 128.221.253.35

netmask: 255.255.255.224

gateway:

mtu: 1500

8K14Z23

name: director-1-1-B

wwn_seed: 2D60002F

interfaces:

EC-01

role: cust

ip:

address: 10.226.81.156

netmask: 255.255.248.0

gateway: 10.226.80.1

mtu: 1500

LC-00

role: com

ip:

address: 128.221.250.36

netmask: 255.255.255.224

gateway:

mtu: 1500

LC-01

role: com

ip:

address: 128.221.251.36

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-00

role: man

ip:

address: 128.221.252.36

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-01

role: man

ip:

address: 128.221.253.36

netmask: 255.255.255.224

gateway:

mtu: 1500

// For a Local cluster, the cluster-2 output below does not appear

name: cluster-2

director_count: 2

instance: 1

id: 2

guid: 43A22L9

ldap:

call_home:

nodes

8K17Z23

name: director-2-1-A

wwn_seed: 2D600030

interfaces:

EC-01

role: cust

ip:

address: 10.226.81.157

netmask: 255.255.248.0

gateway: 10.226.80.1

mtu: 1500

LC-00

role: com

ip:

address: 128.221.250.67

netmask: 255.255.255.224

gateway:

mtu: 1500

LC-01

role: com

ip:

address: 128.221.251.67

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-00

role: man

ip:

address: 128.221.252.67

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-01

role: man

ip:

address: 128.221.253.67

netmask: 255.255.255.224

gateway:

mtu: 1500

8K11Z23

name: director-2-1-B

wwn_seed: 2D600031

interfaces:

EC-01

role: cust

ip:

address: 10.226.81.158

netmask: 255.255.248.0

gateway: 10.226.80.1

mtu: 1500

LC-00

role: com

ip:

address: 128.221.250.68

netmask: 255.255.255.224

gateway:

mtu: 1500

LC-01

role: com

ip:

address: 128.221.251.68

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-00

role: man

ip:

address: 128.221.252.68

netmask: 255.255.255.224

gateway:

mtu: 1500

MC-01

role: man

ip:

address: 128.221.253.68

netmask: 255.255.255.224

gateway:

mtu: 1500

 

Please review the above details.

Do you want to proceed(y/n)? (default: y): <Enter>

 

SCIF has been copied to ['10.226.81.156', '10.226.81.155', '10.226.81.158','10.226.81.157'] successfully.

 

Phase-1 interview process is completed.

 

To start the phase-1 configuration, run the command "vplex_system_config -s".

 

service@localhost:~>

 

3. Once the process completes successfully, go to the next task.
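
The phase-1 interview also enables passwordless SSH between the nodes. As an optional sanity check (not part of the official procedure), you can verify from the node where the interview was run that every peer answers without a password prompt. A minimal sketch, assuming the example addresses above:

# Each command should print the peer's hostname without asking for a password.
# BatchMode=yes makes ssh fail instead of prompting if the key exchange did not work.
for ip in 10.226.81.155 10.226.81.156 10.226.81.157 10.226.81.158; do
  ssh -o BatchMode=yes service@"$ip" hostname
done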

 

3.3 Apply the configuration (phase 1)

Note: Make sure the back-end cables are connected to the SAN. If any port shows no-link, the script fails while validating the BE port status.

1. Start the phase-1 system_config process with the vplex_system_config -s command.

The console output below is split into two parts: the beginning and the end of the command.

 

service@localhost:~> vplex_system_config -s

Starting the system configuration process for phase 1...

[WARNING]: Skipping plugin (/usr/lib/python3.6/site-packages/ansible/plugins/connection/saltstack.py) as it seems to be invalid: The 'cryptography' distribution was not found and is required by ansible
PLAY [localhost]
**************************************************************************************
****************************************************
TASK [Gathering Facts]
**************************************************************************************
**********************************************
ok: [localhost]

PLAY [cluster*]
**************************************************************************************
*****************************************************
TASK [Gathering Facts]
**************************************************************************************
**********************************************
[WARNING]: Platform linux on host 8K14Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8K14Z23]

[WARNING]: Platform linux on host 8K17Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8K17Z23]

[WARNING]: Platform linux on host 8K11Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8K11Z23]

[WARNING]: Platform linux on host 8KJ0Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8KJ0Z23]

TASK [cfg_prechecks : Execute multi-user.target command]
**************************************************************************************
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo
ok: [8KJ0Z23]
ok: [8K17Z23]
ok: [8K11Z23]
ok: [8K14Z23]
TASK [cfg_prechecks : validating the multi user target status]
**************************************************************************************
skipping: [8K14Z23]
skipping: [8KJ0Z23]
skipping: [8K11Z23]
skipping: [8K17Z23]
TASK [cfg_display_be_ports : Show wwn details]
**************************************************************************************
skipping: [8K14Z23]
ok: [8KJ0Z23] => {
"msg": "WWN details are: {'Cluster 1': ['0xc001445a802e0800',
'0xc001445a802f0800', '0xc001445a80300800', '0xc001445a80310800',
'0xc001445a802e0900', '0xc001445a802f0900', '0xc001445a80300900',
'0xc001445a80310900'], 'Cluster 2': ['0xc001445a80300800', '0xc001445a80310800',
'0xc001445a80300900', '0xc001445a80310900']}"
}
skipping: [8K11Z23]
skipping: [8K17Z23]
RUNNING HANDLER [cfg_private_interfaces : Remove redundant idrac route]
***********************************************************************************
changed: [8KJ0Z23]
changed: [8K14Z23]
changed: [8K11Z23]
changed: [8K17Z23]
PLAY RECAP
**************************************************************************************
**********************************************************
8K11Z23 : ok=98 changed=34 unreachable=0 failed=0 skipped=129 rescued=0 ignored=0
8K14Z23 : ok=105 changed=34 unreachable=0 failed=0 skipped=161 rescued=0 ignored=0
8K17Z23 : ok=98 changed=35 unreachable=0 failed=0 skipped=129 rescued=0 ignored=0
8KJ0Z23 : ok=111 changed=34 unreachable=0 failed=0 skipped=116 rescued=0 ignored=0
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0


service@localhost:~>
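
The run is successful when the PLAY RECAP reports failed=0 and unreachable=0 for every host. If you captured the console output to a file when the command was run (the log path below is hypothetical), a quick sketch of an automated check:

# Flags any host whose recap line reports failures or unreachable nodes.
# Do NOT re-run vplex_system_config -s on a configured system just to regenerate this output.
grep -E 'failed=[1-9]|unreachable=[1-9]' /tmp/phase1.log \
  && echo "phase 1 reported errors - review the log" \
  || echo "no failures reported"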

 

3.4 Configure iDRAC access

1. Start the iDRAC tasks with the vplex_system_config --idrac command.

The console output below is split into two parts: the beginning and the end of the command.

 

service@director-1-1-b:~> vplex_system_config --idrac

 

Starting the system configuration process for idrac tasks...
PLAY [localhost]
**************************************************************************************
**************************************************************************
TASK [Gathering Facts]
**************************************************************************************
********************************************************************
ok: [localhost]
PLAY [all]
**************************************************************************************
********************************************************************************
TASK [Gathering Facts]
**************************************************************************************
********************************************************************
[WARNING]: Platform linux on host 8K12Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8K12Z23]
[WARNING]: Platform linux on host 8K13Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8K13Z23]
[WARNING]: Platform linux on host 8K10Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8K10Z23]
[WARNING]: Platform linux on host 8K15Z23 is using the discovered Python interpreter at /usr/bin/python3.6, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more information. ok: [8K15Z23]
TASK [cfg_idrac_pwd : include_tasks]
**************************************************************************************
******************************************************

included: /opt/dell/vplex/system_config/ansible/roles/cfg_idrac_pwd/tasks/configure.yml for 8K12Z23, 8K13Z23, 8K10Z23, 8K15Z23
TASK [cfg_idrac_hostname : Setting hostname in iDRAC]
**************************************************************************************
*************************************
ok: [8K13Z23]
ok: [8K12Z23]
ok: [8K10Z23]
ok: [8K15Z23]
TASK [cfg_idrac_hostname : include_tasks]
**************************************************************************************
*************************************************
skipping: [8K12Z23]
skipping: [8K13Z23]
skipping: [8K10Z23]
skipping: [8K15Z23]
PLAY RECAP
**************************************************************************************
********************************************************************************
8K10Z23 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
8K12Z23 : ok=17 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
8K13Z23 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
8K15Z23 : ok=16 changed=4 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
service@localhost:~>

 

2. After the preceding command succeeds, go to the next task.

 

3.5 Configure back-end path zoning

1. Log in to the VPlexcli by entering the vplexcli command, then run the following command to view all director ports.

service@localhost:/opt/dell/vplex/bin/system_config>vplexcli
Trying ::1...
Connected to localhost.
Escape character is '^]'.

// Enter the service password here
VPlexcli:/>
VPlexcli:/> ll clusters/cluster-*/directors/director-*/ports/

 

Example:
VPlexcli:/>
ll clusters/*/directors/*/ports/
/clusters/cluster-1/directors/director-1-1-A/ports:
Name Address Role Status RxPower[uW] TxPower[uW] Temp[C] Speed Topology
----- ------------------ --------- ------ ----------- ----------- ---------------- --------
IO-00       0x0000000000000000         front-end   down        0      0      0        - -
IO-01       0x0000000000000000         front-end   down        0      0      0       - -
IO-02       0xc001445a80360800         back-end   up            0      0      0      16Gbits/s          p2p
IO-03       0xc001445a80360900         back-end   up            0      0      0       16Gbits/s          p2p
LC-00       128.221.250.35|                  local-com  up            -      -      -       10000               -
LC-01       128.221.251.35|                  local-com  up            -      -      -       10000                -
WC-00     0.0.0.0|                               wan-com   down         -      -      -       10000               -
WC-01     0.0.0.0|                               wan-com   down        -      -      -       10000               -
/clusters/cluster-1/directors/director-1-1-B/ports:
Name Address Role Status RxPower[uW] TxPower[uW] Temp[C] Speed Topology
----- ------------------ --------- ------ ----------- ----------- -------
IO-00       0x0000000000000000         front-end   down        0      0      0      -                        -
IO-01       0x0000000000000000         front-end   down        0      0      0      -                        -
IO-02       0xc001445a80370800         back-end   up            0      0      0       16Gbits/s          p2p
IO-03       0xc001445a80370900         back-end   up            0      0      0       16Gbits/s          p2p
LC-00       128.221.250.36|                  local-com  up            -      -      -       10000               -
LC-01       128.221.251.36|                  local-com  up            -      -      -       10000               -
WC-00     0.0.0.0|                               wan-com   down        -      -      -       10000               -
WC-01     0.0.0.0|                               wan-com   down        -      -      -       10000               -
// For a Local cluster, the cluster-2 output below does not appear

/clusters/cluster-2/directors/director-2-1-A/ports:
Name Address Role Status RxPower[uW] TxPower[uW] Temp[C] Speed Topology
----- ------------------ --------- ------ ----------- ----------- -------
IO-00       0x0000000000000000         front-end   down        0      0      0      -                        -
IO-01       0x0000000000000000         front-end   down        0      0      0      -                        -
IO-02       0xc001445a80380800         back-end   up            0      0      0       16Gbits/s          p2p
IO-03       0xc001445a80380900         back-end   up             0      0      0       16Gbits/s          p2p
LC-00       128.221.250.67|                  local-com  up            -      -      -       10000               -
LC-01       128.221.251.67|                  local-com  up            -      -      -       10000               -
WC-00     0.0.0.0|                               wan-com   down        -      -      -       10000               -
WC-01     0.0.0.0|                               wan-com   down        -      -      -       10000               -
/clusters/cluster-2/directors/director-2-1-B/ports:
Name Address Role Status RxPower[uW] TxPower[uW] Temp[C] Speed Topology
----- ------------------ --------- ------ ----------- ----------- -------
IO-00       0x0000000000000000         front-end   down        0      0      0      -                        -
IO-01       0x0000000000000000         front-end   down        0      0      0      -                        -
IO-02       0xc001445a80390800         back-end   up            0      0      0      16Gbits/s          p2p
IO-03       0xc001445a80390900         back-end   up            0      0      0      16Gbits/s          p2p
LC-00       128.221.250.68|                  local-com  up            -      -      -       10000               -
LC-01       128.221.251.68|                  local-com  up            -      -      -       10000               -

WC-00     0.0.0.0|                               wan-com   down        -      -      -       10000               -
WC-01     0.0.0.0|                               wan-com   down        -      -      -       10000               -

Using the back-end port information in the output above, configure zones on the FC switches to connect the metro node back-end ports to the storage array ports; a hypothetical zoning sketch is shown below.

On the storage array, register the metro node back-end port WWNs and create a single host object that contains all of the back-end ports, for example MetroNode.
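
The exact zoning syntax depends on the switch vendor. The sketch below shows roughly what the cluster-1 back-end zoning could look like on a Brocade FOS switch. The alias, zone, and config names are hypothetical, the storage port WWN is taken from the array listing below, the metro node WWNs are the cluster-1 example WWNs recorded later in step 2 (written without the 0x prefix, as switch zoning expects), and the # lines are explanatory comments only. Single-initiator zoning or your site's own zoning policy may call for a different layout.

# Aliases for the metro node cluster-1 back-end ports and one storage array port
alicreate "mn_c1_be_ports", "c0:01:44:5a:80:36:08:00; c0:01:44:5a:80:36:09:00; c0:01:44:5a:80:37:08:00; c0:01:44:5a:80:37:09:00"
alicreate "array_spA_p0", "50:06:01:64:48:a0:09:48"
# Zone the metro node back-end ports with the array port
zonecreate "z_mn_c1_array_spA", "mn_c1_be_ports; array_spA_p0"
# Add the zone to the zoning configuration and enable it
# (assumes a configuration named cfg_prod already exists; use cfgcreate for a new one)
cfgadd "cfg_prod", "z_mn_c1_array_spA"
cfgenable "cfg_prod"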

 

View the storage arrays from the metro node:

VPlexcli:/> ll clusters/*/storage-elements/storage-arrays/
/clusters/cluster-1/storage-elements/storage-arrays:
Name       Connectivity Status     Auto Switch       Ports         Logical Unit Count
---------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------
DellEMC-PowerStore-7777                          ok    -      0x58ccf090492006ff,          24

0x58ccf098492006ff

DellEMC-PowerStore-FNM00185000184      ok    -      0x58ccf090492006eb,         0

0x58ccf098492006eb

EMC-CLARiiON-FNM00185000853            ok    true  0x5006016448a00948,        302

0x5006016c48a00948

EMC-SYMMETRIX-197900205                  error  -    0x50000973b0033405,        528

0x50000973b0033445

/clusters/cluster-2/storage-elements/storage-arrays:

Name       Connectivity Status     Auto Switch       Ports         Logical Unit Count
---------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------
DellEMC-PowerStore-FNM00185000184      ok    -      0x58ccf090492206eb,         126

0x58ccf090492306eb

EMC-CLARiiON-APM00193751210            ok    true  0x5006016549e034ef,         302

0x5006016d49e034ef

EMC-SYMMETRIX-197900209                  ok    -      0x50000973b00344c4,        526

0x50000973b00344c5

2. Note the node WWNs of the back-end ports.

Example cluster-1 WWNs: 0xc001445a80360800, 0xc001445a80360900, 0xc001445a80370800, 0xc001445a80370900

Example cluster-2 WWNs: 0xc001445a80380800, 0xc001445a80380900, 0xc001445a80390800, 0xc001445a80390900

 

3. Complete the back-end array zoning to the metro node ('Voyager') WWNs noted in step 2.

4. To make sure the back-end ports IO-02 and IO-03 of every node under clusters/*/directors/*/ports/ are up, run the command from step 1 of this back-end path zoning procedure.

5. After the phase-2 system configuration has been applied, make sure the front-end ports IO-00 and IO-01 of all nodes are usable (command: ll clusters/*/directors/*/ports/).

6. Make sure the storage volumes are available according to the volume prerequisites; see the sketch after this list.

7. After zoning is complete, go to the next task.
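
To confirm that the meta and logging volumes prepared in section 2.1 are visible to the cluster, you can list the discovered storage volumes from the VPlexcli. A minimal sketch; the storage-volumes context sits alongside the storage-arrays context used above:

VPlexcli:/> ll /clusters/cluster-1/storage-elements/storage-volumes/

Each exported LUN should appear with its capacity; candidate meta volumes must be >= 80 GB and logging volumes between 5 GB and 20 GB, as listed in section 2.1. If freshly exported volumes do not appear, re-check after the array presentation has been corrected and the array rescanned.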


 
