IBM & Red Hat on AWS

Unlock Seamless iSCSI Storage Integration: A Guide to FSxN on ROSA Clusters

In a previous blog, we introduced a feature in the Trident 25.02 release that simplifies preparing the worker nodes of an OpenShift cluster for iSCSI workloads. This enhancement eliminates manual node preparation, streamlining the process for Kubernetes worker nodes and benefiting users of Red Hat OpenShift Service on AWS (ROSA). With this feature, provisioning persistent volumes for various workloads, including virtual machines running on OpenShift Virtualization on bare metal nodes within a ROSA cluster, becomes effortless.

In this blog, we provide a comprehensive guide to installing Amazon FSx for NetApp ONTAP (FSxN) on AWS and using it to provision storage for containers and virtual machines running on ROSA clusters. Join us as we walk through the installation and configuration of Trident 25.02, showcasing how to create container applications and virtual machines on ROSA clusters using iSCSI volumes. Additionally, we will demonstrate that Trident supports the ReadWriteMany (RWX) access mode for iSCSI volumes in Block mode, enabling live migration of VMs created with iSCSI storage. Get ready to unlock seamless storage integration and enhance your ROSA cluster deployments!

 ROSA clusters with FSxN storage

ROSA integrates seamlessly with Amazon FSx for NetApp ONTAP (FSxN), a fully managed, scalable shared storage service built on NetApp's renowned ONTAP file system. With FSxN, customers can leverage key features such as snapshots, FlexClones, cross-region replication with SnapMirror, and a highly available file server that supports seamless failover. The integration with the NetApp Trident driver, a dynamic Container Storage Interface (CSI) provisioner, facilitates the management of Kubernetes Persistent Volume Claims (PVCs) on storage disks. This driver automates the on-demand provisioning of storage volumes across diverse deployment environments, making it simpler to scale and protect data for your applications. One key benefit of FSxN is that it is a true first-party AWS offering, just like EBS, meaning customers can retire their committed spend with AWS and get support directly from AWS as well.

Solution overview

This diagram shows the ROSA cluster deployed across multiple availability zones (AZs). The ROSA cluster's control plane and infrastructure nodes are in Red Hat's VPC, while the worker nodes are in a VPC in the customer's account. We'll create an FSxN file system within the same VPC and install the Trident provisioner in the ROSA cluster, allowing all the subnets of this VPC to connect to the file system.

ROSA HCP arch

Figure 1: ROSA HCP architecture

Prerequisites

● An AWS account
● A Red Hat account
● An IAM user with appropriate permissions to create and access a ROSA cluster
● AWS CLI
● ROSA CLI
● OpenShift command-line interface (oc)
● Helm 3
● An HCP ROSA cluster
● Access to the Red Hat OpenShift web console
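
Before proceeding, it can help to confirm that the CLIs are installed and that you can reach your cluster. A quick sanity check might look like the following (this assumes you have already run aws configure and rosa login):

# aws --version
# rosa version
# oc version --client
# helm version
# rosa list clusters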

Step 1: Provision FSx for NetApp ONTAP

Create a multi-AZ FSx for NetApp ONTAP file system in the same VPC as the ROSA cluster. There are several ways to do this.
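
For example, a file system can be created directly with the AWS CLI. The sketch below is illustrative only: the subnet IDs and password are placeholders, and the SVM and volume that the CloudFormation template also creates would still need to be created separately (for example, with aws fsx create-storage-virtual-machine).

# aws fsx create-file-system \
  --file-system-type ONTAP \
  --storage-capacity 1024 \
  --subnet-ids <subnet1_ID> <subnet2_ID> \
  --ontap-configuration "DeploymentType=MULTI_AZ_1,ThroughputCapacity=1024,PreferredSubnetId=<subnet1_ID>,FsxAdminPassword=<Define Admin password>" \
  --tags Key=Name,Value=ROSA-myFSxONTAP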

In this blog, we show the creation of FSxN using a CloudFormation (CFN) stack.

  1. Clone the GitHub repository

# git clone https://github.com/aws-samples/rosa-fsx-netapp-ontap.git

  2. Run the CloudFormation Stack

Run the command below by replacing the parameter values with your own values:

# cd rosa-fsx-netapp-ontap/fsx

aws cloudformation create-stack \
--stack-name ROSA-FSXONTAP \
--template-body file://./FSxONTAP.yaml \
--region <region-name> \
--parameters \
ParameterKey=Subnet1ID,ParameterValue=[subnet1_ID] \
ParameterKey=Subnet2ID,ParameterValue=[subnet2_ID] \
ParameterKey=myVpc,ParameterValue=[VPC_ID] \
ParameterKey=FSxONTAPRouteTable,ParameterValue=[routetable1_ID,routetable2_ID] \
ParameterKey=FileSystemName,ParameterValue=ROSA-myFSxONTAP \
ParameterKey=ThroughputCapacity,ParameterValue=1024 \
ParameterKey=FSxAllowedCIDR,ParameterValue=[your_allowed_CIDR] \
ParameterKey=FsxAdminPassword,ParameterValue=[Define Admin password] \
ParameterKey=SvmAdminPassword,ParameterValue=[Define SVM password] \
--capabilities CAPABILITY_NAMED_IAM

Where:
region-name: the same region where the ROSA cluster is deployed
subnet1_ID: ID of the preferred subnet for FSxN
subnet2_ID: ID of the standby subnet for FSxN
VPC_ID: ID of the VPC where the ROSA cluster is deployed
routetable1_ID, routetable2_ID: IDs of the route tables associated with the subnets chosen above
your_allowed_CIDR: allowed CIDR range for the FSx for ONTAP security group ingress rules to control access. You can use 0.0.0.0/0 or any appropriate CIDR to allow traffic to reach the specific ports of FSx for ONTAP.

Define Admin password: a password to log in to FSxN
Define SVM password: a password to log in to the SVM that will be created

Verify that your file system and storage virtual machine (SVM) have been created using the Amazon FSx console.
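
If you prefer the CLI, you can wait for the stack to finish and run the same verification from your terminal; the stack can take tens of minutes to complete, and IDs will differ in your account:

# aws cloudformation wait stack-create-complete --stack-name ROSA-FSXONTAP --region <region-name>
# aws cloudformation describe-stacks --stack-name ROSA-FSXONTAP --region <region-name> --query "Stacks[0].Outputs"
# aws fsx describe-file-systems --query "FileSystems[].{ID:FileSystemId,State:Lifecycle,Type:FileSystemType}" --output table
# aws fsx describe-storage-virtual-machines --query "StorageVirtualMachines[].{ID:StorageVirtualMachineId,Name:Name,State:Lifecycle}" --output table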

 

Step 2: Install the Trident CSI driver for the ROSA cluster

Install Trident using the Trident Certified Operator in the OperatorHub. For additional methods of installing Trident, refer to the Trident documentation. Ensure that all Trident pods are running after the installation is successful.

Install Trident operator from operator hub

Figure 2: Install from OperatorHub

Install the operator in the trident namespace. Once the operator is installed, click View Operator.

Now install the Trident orchestrator by clicking Create instance.

Create instance using the operator

Figure 3: Operator install

Go to the YAML view and update the nodePrep parameter to include iscsi.

Create Trident operator instance

Figure 4: Create Trident operator instance
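
If you prefer to apply the orchestrator from the CLI rather than the console YAML view, a minimal TridentOrchestrator manifest with iSCSI node preparation enabled might look like the following. The nodePrep field is the Trident 25.02 setting referenced above; the debug and namespace values are typical defaults and should be adjusted for your environment (verify the exact fields against the Trident documentation for your version).

# cat torc.yaml
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true
  namespace: trident
  nodePrep:
  - iscsi

# oc apply -f torc.yaml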

Once the orchestrator status changes to Installed, ensure that all Trident pods are in the Running state.

Trident operator details

Figure 5: Trident operator details
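
You can also confirm the rollout from the CLI; pod names and counts will vary with your cluster size:

# oc get tridentorchestrator trident
# oc get pods -n trident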

Now you can log back into the ROSA worker nodes and verify that iscsid and multipathd are running and that the multipath.conf file has the required entries.

[root@localhost fsx]# oc debug node/ip-10-0-0-196.us-west-2.compute.internal

Starting pod/ip-10-0-0-196us-west-2computeinternal-debug-tv7vw …
To use host binaries, run `chroot /host`

Pod IP: 10.0.0.196
If you don’t see a command prompt, try pressing enter.

sh-5.1# chroot /host
sh-5.1# systemctl status iscsid
● iscsid.service - Open-iSCSI
Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled; preset: disabled)
Active: active (running) since Wed 2025-08-06 20:31:37 UTC; 5min ago
TriggeredBy: ● iscsid.socket
Docs: man:iscsid(8)
man:iscsiuio(8)
man:iscsiadm(8)
Main PID: 621624 (iscsid)
Status: "Ready to process requests"
Tasks: 1 (limit: 99844)
Memory: 4.1M
CPU: 5ms
CGroup: /system.slice/iscsid.service
└─621624 /usr/sbin/iscsid -f

Aug 06 20:31:37 ip-10-0-0-196 systemd[1]: Starting Open-iSCSI...
Aug 06 20:31:37 ip-10-0-0-196 systemd[1]: Started Open-iSCSI.
sh-5.1# systemctl status multipathd
● multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; preset: enabled)
Active: active (running) since Wed 2025-08-06 20:31:38 UTC; 5min ago
TriggeredBy: ○ multipathd.socket
Process: 621707 ExecStartPre=/sbin/modprobe -a scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm-multipath (code=exited, stat>
Process: 621708 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
Main PID: 621709 (multipathd)
Status: "up"
Tasks: 7
Memory: 18.9M
CPU: 37ms
CGroup: /system.slice/multipathd.service
└─621709 /sbin/multipathd -d -s

Aug 06 20:31:38 ip-10-0-0-196 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Aug 06 20:31:38 ip-10-0-0-196 multipathd[621709]: --------start up--------
Aug 06 20:31:38 ip-10-0-0-196 multipathd[621709]: read /etc/multipath.conf
Aug 06 20:31:38 ip-10-0-0-196 multipathd[621709]: path checkers start up
Aug 06 20:31:38 ip-10-0-0-196 systemd[1]: Started Device-Mapper Multipath Device Controller.
sh-5.1# cat /etc/multipath.conf
defaults {
    find_multipaths no
}
blacklist {
    device {
        vendor .*
        product .*
    }
}
blacklist_exceptions {
    device {
        vendor NETAPP
        product LUN
    }
}
sh-5.1#

Step 3: Configure the Trident backend to use FSx for NetApp ONTAP (ONTAP SAN for iSCSI)

The Trident backend configuration tells Trident how to communicate with the storage system (in this case, FSxN). To create the backend, we provide the fsxadmin credentials for the file system, its management LIF, and the name of the SVM to use for storage provisioning. We will use the ontap-san driver to provision storage volumes in the FSxN file system.

  1. Create the backend object

Create the backend object using the command shown and the following yaml.

# cat tbc-fsx-san.yaml
apiVersion: v1
kind: Secret
metadata:
  name: tbc-fsx-san-secret
type: Opaque
stringData:
  username: fsxadmin
  password: <value provided for Define Admin password as a parameter to the CloudFormation stack>
---
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-fsx-san
spec:
  version: 1
  storageDriverName: ontap-san
  managementLIF: <management LIF of the file system in AWS>
  backendName: tbc-fsx-san
  svm: <SVM name that is created in the file system>
  defaults:
    storagePrefix: demo
    nameTemplate: "{{ .config.StoragePrefix }}_{{ .volume.Namespace }}_{{ .volume.RequestName }}"
  credentials:
    name: tbc-fsx-san-secret


# oc apply -f tbc-fsx-san.yaml -n trident

2. Verify that the backend object has been created, and that the Phase shows Bound and the Status shows Success

[ec2-user@ip-10-0-128-119 ~]$ oc create -f tbc-fsx-san.yaml -n trident

secret/tbc-fsx-san-secret created

tridentbackendconfig.trident.netapp.io/tbc-fsx-san created

[ec2-user@ip-10-0-128-119 ~]$

[ec2-user@ip-10-0-128-119 ~]$

[ec2-user@ip-10-0-128-119 ~]$ oc get tbc -n trident

NAME          BACKEND NAME   BACKEND UUID                           PHASE   STATUS

tbc-fsx-san   tbc-fsx-san    10d013ab-b291-4a0c-91fa-6c76eddf554e   Bound   Success

[ec2-user@ip-10-0-128-119 ~]$

3. Create Storage Class for iSCSI

Now that the Trident backend is configured, you can create a Kubernetes storage class that uses it. A storage class is a cluster-wide resource object that describes and classifies the type of storage applications can request.

# cat sc-fsx-san.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-fsx-san
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"
  media: "ssd"
  provisioningType: "thin"
  fsType: ext4
  snapshots: "true"
  storagePools: "tbc-fsx-san:.*"
allowVolumeExpansion: true

# oc create -f sc-fsx-san.yaml

4. Verify that the storage class is created

[ec2-user@ip-10-0-128-119 ~]$ oc create -f sc-fsx-san.yaml
storageclass.storage.k8s.io/sc-fsx-san created
[ec2-user@ip-10-0-128-119 ~]$


[ec2-user@ip-10-0-128-119 ~]$ oc get sc
NAME                PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi             ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   21h
gp3-csi (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   21h
sc-fsx-san          csi.trident.netapp.io   Delete          Immediate              true                   13s
[ec2-user@ip-10-0-128-119 ~]$

5. Create a Snapshot class in Trident so that CSI snapshots can be taken

# cat snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapshotclass
driver: csi.trident.netapp.io
deletionPolicy: Retain

# oc create -f snapshotclass.yaml

[ec2-user@ip-10-0-128-119 ~]$
[ec2-user@ip-10-0-128-119 ~]$ oc create -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/trident-snapshotclass created
[ec2-user@ip-10-0-128-119 ~]$
[ec2-user@ip-10-0-128-119 ~]$ oc get volumeSnapshotClass
NAME                    DRIVER                  DELETIONPOLICY   AGE
csi-aws-vsc             ebs.csi.aws.com         Delete           22h
trident-snapshotclass   csi.trident.netapp.io   Retain           18s
[ec2-user@ip-10-0-128-119 ~]$
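
With the VolumeSnapshotClass in place, a CSI snapshot of any Trident-provisioned PVC can be requested with a VolumeSnapshot object. The sketch below is illustrative: the PVC name postgres-pvc and the postgres-san namespace refer to the application created in the next section, so adjust them to an existing PVC in your cluster.

# cat pvc-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-pvc-snapshot
spec:
  volumeSnapshotClassName: trident-snapshotclass
  source:
    persistentVolumeClaimName: postgres-pvc

# oc create -f pvc-snapshot.yaml -n postgres-san
# oc get volumesnapshot -n postgres-san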

This completes the installation of the Trident CSI driver and its connectivity to the FSxN file system using iSCSI.

Using iSCSI storage for container apps on ROSA

  1. Deploy a PostgreSQL application using the iSCSI storage class

a. Use the following YAML file to deploy the PostgreSQL app:
# cat postgres-san.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14
        env:
        - name: POSTGRES_USER
          # value: "myuser"
          value: "admin"
        - name: POSTGRES_PASSWORD
          # value: "mypassword"
          value: "adminpass"
        - name: POSTGRES_DB
          value: "mydb"
        - name: PGDATA
          value: "/var/lib/postgresql/data/pgdata"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: sc-fsx-san
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
  type: ClusterIP

# oc create namespace postgres-san
[root@localhost HAFSX]# oc create -f postgres-san.yaml -n postgres-san
deployment.apps/postgres created
persistentvolumeclaim/postgres-pvc created
service/postgres created

b. Verify that the application pod is running.

Verify that a PVC and a PV are created for the application, using the commands below. Note that the PVC uses the SAN storage class previously created for iSCSI.
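
A quick way to check both, assuming the postgres-san namespace used above:

# oc get pods -n postgres-san
# oc get pvc,pv -n postgres-san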

c. Verify that iSCSI sessions are created on the node where the pod runs.

[ec2-user@ip-10-0-128-119 ~]$ oc debug node/ip-10-0-1-192.us-west-2.compute.internal
Starting pod/ip-10-0-1-192us-west-2computeinternal-debug-4j8gg …
To use host binaries, run `chroot /host`
Pod IP: 10.0.1.192
If you don’t see a command prompt, try pressing enter.
sh-5.1# chroot /host
sh-5.1# iscsiadm -m session
tcp: [1] 10.0.0.107:3260,1028 iqn.1992-08.com.netapp:sn.5cdf7ad172f811f0883cc908a40ebab0:vs.3 (non-flash)
tcp: [2] 10.0.2.18:3260,1029 iqn.1992-08.com.netapp:sn.5cdf7ad172f811f0883cc908a40ebab0:vs.3 (non-flash)
sh-5.1#

d. Verify that a LUN is created

Verify that a LUN is created on the volume in FSxN for this application and that the LUN is mapped. You can log in to the FSxN CLI using fsxadmin and the password you created earlier.
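
To reach the FSxN CLI, SSH to the file system's management endpoint as fsxadmin. The endpoint DNS name below is illustrative; you can find yours on the Administration tab of the file system in the Amazon FSx console.

# ssh fsxadmin@management.fs-0d3560ad60fbad076.fsx.us-west-2.amazonaws.com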

FsxId0d3560ad60fbad076::> volume show -vserver fsx
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
fsx       fsx_root     aggr1        online     RW          1GB    972.2MB    0%
fsx       trident_postgres_san_postgres_pvc_a56f2
                       aggr1        online     RW       5.50GB     5.45GB    0%
fsx       vol1         aggr1        online     RW          1TB    860.2GB    0%
3 entries were displayed.

FsxId0d3560ad60fbad076::> lun show -volume trident_postgres_san_postgres_pvc_a56f2 -vserver fsx
Vserver   Path                            State   Mapped   Type     Size
--------- ------------------------------- ------- -------- -------- --------
fsx       /vol/trident_postgres_san_postgres_pvc_a56f2/lun0
                                          online  mapped   linux    5GB

FsxId0d3560ad60fbad076::> igroup show -vserver fsx
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
fsx       ip-10-0-1-192.us-west-2.compute.internal-239f6c7b-d5da-4c33-94a5-65d13c0caaeb
                       iscsi    linux    iqn.1994-05.com.redhat:cae290244627

Using iSCSI storage for VMs on OpenShift Virtualization in ROSA

1. Verify that you have bare metal worker nodes in the cluster.

To be able to create VMs, you need bare metal worker nodes on the ROSA cluster; one way to check is shown below.
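
You can confirm this from the CLI by listing the worker nodes together with their EC2 instance types; bare metal instance types end in .metal:

# oc get nodes -L node.kubernetes.io/instance-type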

2. Install OpenShift Virtualization using the Operator

You can install OpenShift Virtualization using the OpenShift Virtualization Operator from the OperatorHub. Once it is installed and configured, Virtualization appears in the OpenShift console UI.
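
After the operator is installed and its HyperConverged instance is created, you can check that the virtualization components are healthy from the CLI. The openshift-cnv namespace is the operator's default; adjust if you chose a different one.

# oc get csv -n openshift-cnv
# oc get hyperconverged -n openshift-cnv
# oc get pods -n openshift-cnv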

Install OpenShift Virtualization

3. Deploy a VM using iSCSI storage class

Click on Create VirtualMachine and select From template.

Select the Fedora VM template. You can choose any OS for which a boot source is available.

4. Customize the VM

Customize the VM to provide the storage class for the boot disk and create additional disks with the selected storage class.

Click on Customize VirtualMachine.

Customize VM

5. Click on the Disks tab and click on Edit for the root disk

Edit root disk

6. Ensure that sc-fsx-san is selected for the storage class.

Select Shared Access (RWX) for the access mode and Block for the volume mode. Trident supports the RWX access mode for iSCSI storage when the volume mode is Block. This combination is required on the disks' PVCs so that you can live migrate VMs: live migration moves a VM from one worker node to another, which requires RWX access, and Trident provides it for iSCSI in Block volume mode.

Disk storage class selection

Note:

If sc-fsx-san is set as the default storage class in the cluster, it will be selected automatically.
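
For reference, the PVC that backs such a disk ends up with storage settings along the following lines. This is an illustrative sketch of the relevant fields, not output captured from the cluster, and the PVC name is hypothetical:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-demo-vm-rootdisk
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 30Gi
  storageClassName: sc-fsx-san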

7. Add another disk

Click Add disk, select Empty disk (since this is just an example), and ensure that the sc-fsx-san storage class, Block volume mode, and RWX access mode are selected.

Click Save and then click Create VirtualMachine. The VM comes to a Running state.

View running VMs

8. Check the VM pods and PVCs. Verify that the PVCs are created using the iSCSI storage class with the RWX access mode.

[root@localhost fsx]# oc get pods,pvc
NAME                                     READY   STATUS    RESTARTS   AGE
pod/virt-launcher-fedora-demo-vm-lr2rj   1/1     Running   0          111s

NAME                                                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/dv-fedora-demo-vm-disk-green-impala-85-cls8e4   Bound    pvc-cfceae37-d159-46d9-8925-4e3bc35ebdc6   30Gi       RWX            sc-fsx-san     <unset>                 2m22s
persistentvolumeclaim/dv-fedora-demo-vm-rootdisk-2uz76f               Bound    pvc-98fb0938-7530-4ec8-bf36-78e7101468f5   30Gi       RWX            sc-fsx-san     <unset>                 2m22s
[root@localhost fsx]#

9. Verify that a LUN is created in each volume corresponding to the disk PVCs by logging in to the FSxN CLI.

FsxId0d3560ad60fbad076::> volume show -vserver fsx
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
fsx       fsx_root     aggr1        online     RW          1GB    971.0MB    0%
fsx       trident_default_prime_cc24a6a2_86ca_4e0b_a467_e9fac9e391a9_cfcea
                       aggr1        online     RW         33GB    33.00GB    0%
fsx       trident_openshift_virtualization_os_images_prime_2a2f581f_87a3_4e7a_8d3a_3a3d54253796_8a3b8
                       aggr1        online     RW         33GB    31.59GB    4%
fsx       trident_openshift_virtualization_os_images_prime_54ef0fcf_f60e_4561_b8e2_571d1cdebc06_eaf2e
                       aggr1        online     RW         33GB    31.33GB    5%
fsx       trident_openshift_virtualization_os_images_prime_7e5eb814_68a6_4836_8321_b2d14d838ffc_91f5f
                       aggr1        online     RW         33GB    30.86GB    6%
fsx       trident_openshift_virtualization_os_images_prime_9cf9f23d_1ed8_4100_a05e_c73c938a2d78_bebc7
                       aggr1        online     RW         33GB    30.68GB    7%
fsx       trident_openshift_virtualization_os_images_prime_a4d7925b_d11b_40d6_a052_1c9932e9c0c6_2e76b
                       aggr1        online     RW         33GB    31.07GB    5%
fsx       trident_openshift_virtualization_os_images_prime_e77ccf75_7798_43cf_85d6_f9f69f9eb200_5c85c
                       aggr1        online     RW         33GB    32.36GB    1%
fsx       trident_openshift_virtualization_os_images_tmp_pvc_50094ec5_587a_4de2_86a4_7f200f8e89b8_98fb0
                       aggr1        online     RW         33GB    32.33GB    2%
fsx       trident_postgres_san_postgres_pvc_a56f2
                       aggr1        online     RW       5.50GB     5.39GB    2%
fsx       vol1         aggr1        online     RW          1TB    849.1GB    0%
11 entries were displayed.

FsxId0d3560ad60fbad076::> lun show -volume trident_default_prime_cc24a6a2_86ca_4e0b_a467_e9fac9e391a9_cfcea
Vserver   Path                            State   Mapped   Type     Size
--------- ------------------------------- ------- -------- -------- --------
fsx       /vol/trident_default_prime_cc24a6a2_86ca_4e0b_a467_e9fac9e391a9_cfcea/lun0
                                          online  mapped   linux    30GB

FsxId0d3560ad60fbad076::> lun show -volume trident_openshift_virtualization_os_images_tmp_pvc_50094ec5_587a_4de2_86a4_7f200f8e89b8_98fb0
Vserver   Path                            State   Mapped   Type     Size
--------- ------------------------------- ------- -------- -------- --------
fsx       /vol/trident_openshift_virtualization_os_images_tmp_pvc_50094ec5_587a_4de2_86a4_7f200f8e89b8_98fb0/lun0
                                          online  mapped   linux    30GB

FsxId0d3560ad60fbad076::>

FsxId0d3560ad60fbad076::> igroup show
Vserver   Igroup       Protocol OS Type  Initiators
--------- ------------ -------- -------- ------------------------------------
fsx       ip-10-0-1-192.us-west-2.compute.internal-239f6c7b-d5da-4c33-94a5-65d13c0caaeb
                       iscsi    linux    iqn.1994-05.com.redhat:cae290244627
fsx       ip-10-0-1-227.us-west-2.compute.internal-239f6c7b-d5da-4c33-94a5-65d13c0caaeb
                       iscsi    linux    iqn.1994-05.com.redhat:45b335a3f4b
2 entries were displayed.
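
Because the VM disks use the RWX access mode in Block volume mode, you can now live migrate the VM between worker nodes. Two common ways to trigger a migration are sketched below, assuming the fedora-demo-vm example above and that the virtctl client is installed; the migrate.yaml file name is hypothetical.

# virtctl migrate fedora-demo-vm

# cat migrate.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: fedora-demo-vm-migration
spec:
  vmiName: fedora-demo-vm

# oc create -f migrate.yaml
# oc get vmim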

Conclusion

In this blog, we successfully demonstrated how to integrate FSx for NetApp ONTAP as shared storage with a ROSA cluster using a hosted control plane, leveraging the NetApp Trident CSI driver for iSCSI. We illustrated how Trident release 25.02 streamlines the preparation of worker nodes by configuring iSCSI and multipathing for ONTAP storage. Our step-by-step guide detailed the configuration of the Trident backend and storage class for iSCSI, and how to use them to create containers and VMs. We emphasized that the ontap-san driver supports the RWX access mode in Block volume mode for iSCSI, making it ideal for VM disks in OpenShift Virtualization and enabling live migration of VMs.

For further information on Trident, please refer to the NetApp Trident documentation. Additionally, you can find more resources, including detailed guides and videos, in the Red Hat OpenShift with NetApp section under Containers in the NetApp Solutions documentation. To clean up the setup from this post, follow the instructions provided in the GitHub repository.

Ryan Niksch

Ryan Niksch is a Partner Solutions Architect focusing on application platforms, hybrid application solutions, and modernization. Ryan has worn many hats in his life and has a passion for tinkering and a desire to leave everything he touches a little better than when he found it.

Banumathy Sundhar

In my current role as a Technical Marketing Engineer and in my past role as a Technology Enablement Professional, I have carried out my responsibilities in various ways. I evangelize platforms and products, deep dive into technical areas (Kubernetes, OpenShift, AWS, Azure and Google Clouds), provide live or recorded demos, share information, educate and up-skill via my blogs, live and virtual multi-day sessions. I have provided technical validation of solutions for our customers with NetApp products integrated with OpenShift clusters in a hybrid cloud environment. In my previous role, I have developed and delivered a wide variety of training on technical topics with hands-on lab-intensive content, technical presentations, certification prep sessions and lab sessions at Conferences and Instructor Summits.

Mayur Shetty

Mayur Shetty is a Senior Solution Architect within Red Hat’s Global Partners and Alliances organization. He has been with Red Hat for four years, where he was also part of the OpenStack Tiger Team. He previously worked as a Senior Solutions Architect at Seagate Technology driving solutions with OpenStack Swift, Ceph, and other Object Storage software. Mayur also led ISV Engineering at IBM creating solutions around Oracle database, and IBM Systems and Storage. He has been in the industry for almost 20 years, and has worked on Sun Cluster software, and the ISV engineering teams at Sun Microsystems.