The following is a recording and full transcript from the webinar, “Get your On-Premises NAS in the Azure Cloud”. You can download the full slide deck on SlideShare.


Full Transcript: Get your On-Premises NAS in the Azure Cloud

David Mitchell:             Okay, folks, we’re just on the hour now so let’s get started. I want to click on record. Okay, it’s done. First of all, welcome to today’s webinar. Today we’re going to be talking about getting your on-premises NAS into the Azure Cloud. Today’s presenter is going to be Matt Blanchard, a solutions architect here at SoftNAS.

My name is David Mitchell. Before I hand you over to Matt, I just have a couple of slides to cover. As I mentioned, Matt is our presenter today and I’ll hand you over to him shortly.

It looks like everyone has safely got into GoToWebinar. Hopefully, you can see and you can hear us. If it’s your first time using GoToWebinar, you do have a couple of options for audio. You can either use the mic and speakers or the telephone.

If you’re using the telephone, we do have a direct dial-in for most countries so make sure you do that and enter in your audio pin. If not, use your mic and speakers. You may need just to configure that if you have a couple of different options there in your local device.

Throughout the session today, we are going to have everyone on mute so the best way to handle a question, we found, is to use the questions pane. As Matt goes through the slides and the demo, if you have any questions please post them there.

We have allocated some time at the very end to go over the questions. I’m sure Matt will remind you as he goes through the webinar.

Lastly, as I mentioned and as you probably heard me saying about recording, we are recording this session. If you do need to leave or if a colleague couldn’t make it or if you know of someone else who’s interested and maybe couldn’t make it, we will be sending out a link to the recording after this and also a link to the slides.

We post our slides on SlideShare, so don’t worry about writing down notes or anything like that; you should get access to all the material. That’s it. I am going to now hand you over to Matt.

Matt, if you want to unmute your line. I’m going to make you presenter. I can see your slide, Matt, but I can’t hear you.

Matt Blanchard:             Can you hear me now?

David:             Yeah, loud and clear.

Matt:             Great! Do you see the slides?

David:             I do. Do you want to put it into presentation mode so we can make sure we see it properly?

Matt:             I thought we were in presentation mode there. How’s that?

David:             No, I can just see them in regular view.

Matt:             We’ll try this one. How’s that?

David:             No, it’s still the same for me, Matt.

Matt:             I am sorry. Are you seeing the car in the background?

David:             No, I’m just seeing a picture of the slides.

Matt:             Let me do this.

David:             I guess that’s what everyone on the webinar is seeing. I don’t know if you want to put a comment in the questions pane there if everyone is seeing the same thing.

Matt:             How about that? Now, do you see just this?

David:             Yeah, that’s perfect now. That’s it.

Matt:             We will go from there. I’m sorry.

David:             It’s perfect. Over to you, Matt.

Matt:             I’m sorry about that, David. Starting off once again, my name is Matt Blanchard. I am a principal solutions architect here at SoftNAS. Today, we’re going to talk about some of the advantages of using Microsoft Azure for your cloud storage, and help you make plans to move from your on-premises solution today into the cloud of tomorrow.

This is not a new concept. This is the trend we’ve seen for the last several years: the build-versus-buy question. We get a great economy of scale whenever we buy capacity as OpEx from a partner and use that partnership to advance our IT needs, versus a low economy of scale if I have to invest my own money to build up the information systems myself and buy from large SAN suppliers for networking, storage networks, and so forth.

Hosting and building all of that out myself takes a lot of capital investment. This is the paradigm: on-premises versus cloud architecture. A lot of the things that we have to provide for ourselves on-premises are assumed and given to us in the cloud, such as Microsoft Azure giving us the ability to have full-fledged VMs running inside of our Azure subscription and accessing our SoftNAS virtual SANs. We are able to give you network access control over all your storage needs within a small, packaged, usable space.

What does this afford us? I don’t have to build my own data center. I can have all my applications running in the cloud as services versus having them on-premises, running physically and having to maintain them physically in a data center.

Think about rebuilding applications for the next generation of databases, or installing the next generation of server componentry that may not have the correct driver sets for our applications, and having to rebuild all those things. It makes moving your architecture forward quite tedious.

However, when we start to blur those lines and move into, let’s say, a hosting provider or a cloud service, those dependencies on the actual hardware devices and the physical device drivers start to fade away, because we’re running these applications as services and not as physically supported, siloed architectures.

This movement toward Azure and the cloud makes quite a bit of sense when you start looking at the economies of scale: how fast we can grow in capacity, and things like burst control when we have large amounts of data services to supply on demand versus what we run on a constant day-to-day basis.

Say we are a big software company or a big game company that’s releasing the next new Star Wars game. I’ll have to TM that or something in my conversation; you’ll have to excuse me. It might be some sort of online game that needs extra capacity for the first weekend out, just to support all the new users who’re going to be accessing it.

This burstability and expandability into the cloud make all the sense in the world, because who wants to spend that money on hardware to build out infrastructure for something that may or may not continue to be that large of an investment in the future? We can scale that down over time or scale it up over time, either way. Maybe we undersized our build; you can think of it in that aspect too.

It really makes sense – this paradigm switch into the cloud mantra.

At SoftNAS, we’ve built our product to be flexible and adaptable inside of this cloud architecture. We’ve built a Linux virtual machine based on CentOS, and it runs ZFS as our file system on that kernel.

We run all of our systems on open, controllable systems. We have staff on-site who contribute to these open-source projects, CentOS and ZFS, to make these systems better. We contribute a lot of intellectual property to help advance these technologies into the future.

We, of course, run HTML5 as our admin UI, we have PHP, and Apache is our web server. We have all these open systems to allow us to take advantage of the great open-source community out there on the internet.

We integrate with multiple different service providers. If you have customers that are currently running in AWS or CenturyLink Cloud and they are looking to migrate into Azure, to make a change, it’s very easy for us to come in and help you make that data migration, because inserting a SoftNAS Cloud instance into both of those service providers and then simply migrating the data is a very simple and easy task.

We are actually going to cover that in our demonstration in just a few slides. I promised David I would not slide you all to death today. We’re going to go through a few more of these slides, then we’re going to get into a demonstration, then we’ll touch on a few closing slides, and then we’ll do a quick Q&A.

As I said, we really do take in feedback. We want to be flexible. We want to be open. We want all of our data resources to support multiple use-cases. We offer a full-featured NAS service that does all of these things in the data services tab.

We can do block replication, inline deduplication, caching, storage pools, thin provisioning, writable snapshots, and snapclones. We can do compression and encryption. All of these different offerings, we are able to give you in a single packaged NAS solution.

Once again, all the things where you think, “I’m going to have to implement all of that stuff. I’m going to have to buy all this different componentry and insert it into my hardware,” those are things that are assumed, and we are able to give them to you directly in our NAS solution.

How does SoftNAS work? To be very forthcoming, it’s basically a gateway technology. We are able to present storage capacity, whether it be a CIFS or SMB access medium for Windows users and some sort of Windows file share, an NFS share for Linux machines, or even just an iSCSI block device or the Apple Filing Protocol for entire-machine backups.

If you have end-users or end-devices that need storage repositories of multiple different protocols, we are able then to store that data into say an Azure Blob Storage or even a native Azure storage device.

We are able then to translate those protocols into an object protocol, which is not a native language. We don’t speak in object whenever we’re going through a normal SMB connection, but we do also speak native object directly into Azure Blob. We offer the best of both worlds with this solution.
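
(For readers following the transcript, here is a minimal sketch of what that native object access looks like against Azure Blob Storage, using the azure-storage-blob SDK for Python. The connection string, container, and blob names are placeholder assumptions; SoftNAS performs this kind of translation internally, so this only illustrates the backend protocol, not the product’s code.)

```python
# Illustration only: "speaking native object" to Azure Blob Storage.
# SoftNAS does this translation internally; clients still see CIFS/NFS/iSCSI.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("mattblob-1f3a9c2e")  # placeholder name

# A file dropped onto a share ultimately lands as object writes like this
# (assumes the container already exists; use create_container() otherwise):
container.upload_blob(name="vol1/example.docx", data=b"file contents")

# Reading it back is equally direct object I/O:
print(container.download_blob("vol1/example.docx").readall())
```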

It’s just the same with native block devices: we have a native block protocol that we are able to use to talk directly to the Azure disks that attach to these machines. We are able to create flexible containers that make data uniformly accessible.

How does this play out and work in the real world? What we’re basically going to do is we’re going to present a single IP point of access that all of these file systems will land on. All of our CIFS access, all of our NFS exports, all of the AFP shares will all be enumerated out on a single SoftNAS instance and they will be presented to these applications, servers, and end-users.

The storage pools are nothing more than conglomerations of disks that have been offered up by the Microsoft Azure platform. Whether it’s Azure Blob or just native disks, or even another type of object device that you’ve imported into these drives, we can support all of those device types and create storage pools of different technologies.

And we can attach volumes and LUNs that have shares of different protocols to those storage pools so it allows us to have multiple different connection points to different storage technologies on the backend.

And we do this as a basic translation and it’s all seamless to the end-user or the end device.

We’re going to go really quick into a demonstration of this. If you don’t mind, just stick with me here. David, please interrupt me if my screen does not show up correctly here. I should be showing my screen now that has my Azure portal on it.

What we’re going to do right now is we’re going show you how easy it is to deploy a SoftNAS virtual machine into my Azure portal. I’ve got both the virtual portal up here as well as Microsoft Azure…

My Azure portal has timed out on me so I’ll just come back to this one here. I’m going to show you how to deploy this VM within the gallery. It’s very simple. All we have to do is come down and click on new.

I’m going to select compute and virtual machine. Once I select that, I’m going to select gallery and it’s going to bring up a selector. I can simply come in here and search for SoftNAS. Once I type “soft,” it’s actually going to appear here and I can select the instance that I would like to provision.

I’m not going to go all the way through this provisioning system, but you can kind of get the gist. This is in the interest of time, and so we don’t build up a bunch of extra machines for us.

We’re going to go through and call this MS Blob demo. You would then select your different platforms: we could have an A2, and I think a D4 is one of our standard offerings. We can build out these machines in a multitude of different ways according to your data needs.

If you’re going to be doing quite a bit of read caching, we might want to increase the RAM size, because ZFS is very heavy on RAM for caching. We might come in here and add more memory, say 28 gigs of memory.

We can come in then and create a user. Let’s call it SoftNAS and give it a password. Create a password. I’m sorry I’m not great at talking while I’m typing. Then we just continue forward.

After we select our password, we can come in and create a new cloud service or select a cloud service that we’ve already created before. Then we’ll come in and add some DNS names for this.

We can come in and add some different information for our network and our subnet if we wanted to select a different network. The last piece that we would need is to set up SSH access as well as HTTPS endpoints. Where is HTTPS? There it is.
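
(If you would rather script this deployment than click through the portal, here is a rough sketch using the Azure SDK for Python. The subscription, resource group, NIC, credentials, and the marketplace image reference are illustrative placeholders, not the exact values from this demo.)

```python
# Illustrative sketch: provisioning a VM like the one above with the
# Azure SDK for Python. Assumes the resource group and NIC already exist.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.virtual_machines.begin_create_or_update(
    "my-resource-group",
    "ms-blob-demo",
    {
        "location": "eastus",
        "hardware_profile": {"vm_size": "Standard_D4"},  # A2/D4, as in the demo
        "os_profile": {
            "computer_name": "ms-blob-demo",
            "admin_username": "softnas",
            "admin_password": "<password>",
        },
        "network_profile": {"network_interfaces": [{"id": "<nic-resource-id>"}]},
        "storage_profile": {
            "image_reference": {  # placeholder for the SoftNAS marketplace image
                "publisher": "<publisher>",
                "offer": "<offer>",
                "sku": "<sku>",
                "version": "latest",
            }
        },
    },
)
vm = poller.result()  # blocks until provisioning completes
```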

Once that is created, we are ready to go. We would be able to come in here and click next, next, next, and it would create this instance. I’m going to go ahead and kill this and show you all what we are going to be presented with.

You are going to be presented with a machine that looks something like this. After this machine has built up and everything is lined out correctly, you’re going to have a SoftNAS machine that you’re going to log into and be presented with this UI.

Now how do I add disk repositories to this? How do I add resources? If I want to add a native Microsoft disk to this, an Azure disk, I can come back into my Azure portal and simply select the system that I would like to add it to. I am going to come in and I’m going to click on the dashboard.

Then down here at the bottom… Oops! I’m on the wrong one. I think I need to be on the SoftNAS one here. Yes, this is the one I need to be on.

Down at the bottom here, you’ll see “attach,” and I can attach a new disk. I’ll create a disk, call this one 10 gigs, and attach. It will go through the attachment process for this disk.
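
(Scripted, the same attach might look like the following with the Azure SDK for Python. This sketch assumes managed disks and the VM from the earlier sketch; all names are placeholders.)

```python
# Illustration: attach a new, empty 10 GB data disk to an existing VM.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import DataDisk

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

vm = client.virtual_machines.get("my-resource-group", "ms-blob-demo")
vm.storage_profile.data_disks.append(DataDisk(
    lun=len(vm.storage_profile.data_disks),  # next free LUN slot
    name="ms-blob-demo-data1",
    create_option="Empty",                   # brand-new blank disk
    disk_size_gb=10,
))
client.virtual_machines.begin_create_or_update(
    "my-resource-group", "ms-blob-demo", vm
).result()  # wait for the attach to complete
```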

Once it finishes, the disk will be available for use, and we can move forward with adding a protocol as we choose. In this instance, I’m going to go ahead and show you all how to add a blob device as well.

A brand new option that we’ve just released is adding blob devices for use inside of our SoftNAS storage system. I’m back into my SoftNAS virtual machine – it’s running on my Azure system.

I’m going to come in and I’m going to add a device. I’m going to select Azure Blob. After I select Azure Blob, you’ll notice that I’m given my user name. I can put in MBlanchard as my user name.

I can come in and add my access key. I’m not going to go through the rigmarole of typing my access key out or copying and pasting it. I’m sorry, I don’t want to show that off to the whole world.

I’ll add a container base name here. We would want to customize this so I’m going to call this Matt Blob or something like that. You’ll notice that once I select off of that area, the Matt Blob container base name pops itself down here into the container name.

And that’s basically just coming in and creating a custom container. All container names have to be unique, so we go ahead and throw in some unique characters here at the end of your base name to make sure that it’s completely randomized and unique.
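
(The uniquifying step is simple enough to sketch. Here is one way a base name like “mattblob” could be randomized in Python, following Azure’s container naming rules; the exact suffix scheme SoftNAS uses is not shown here.)

```python
# Illustrative: derive a unique Azure Blob container name from a base name.
import re
import uuid

def unique_container_name(base: str) -> str:
    # Container names: 3-63 chars, lowercase letters, digits, and hyphens.
    base = re.sub(r"[^a-z0-9-]", "-", base.lower()).strip("-")
    suffix = uuid.uuid4().hex[:8]  # random characters appended to the base
    return f"{base}-{suffix}"[:63]

print(unique_container_name("Matt Blob"))  # e.g. matt-blob-1f3a9c2e
```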

We can select our disk size as we would with any maximum disk size. This is thin provisioned by default, but we’re going to have to set a maximum ceiling limit. Then we can select if we’d like to encrypt this disk as well, and give it a password to encrypt it with.

Once again, I would have to add my access key in here to create the blob devices. I’m not going to go through the rigmarole of doing that. In the interest of time, I have gone ahead and added some blob devices and gotten us ready for the rest of the demonstration.

The rest of this demonstration is going to be going through and configuring two SoftNAS machines to talk to each other, with ZFS replication running between them. Right now, what I have set up is two different machines.

You see they are pretty much identical, both machines have disk drives that have already been provisioned, and I have already provisioned these devices for use on a pool on my second machine.

On my second machine, I’ve already configured this pool, but I have not added any protocols; there are basically no files or data on these pools. I created this storage pool ahead of time in the interest of time, so I don’t have to come in and create it twice.

I’m going to replicate this data on my primary instance. This primary instance could be in a datacenter that I am going to be using as my primary datacenter and my primary means of access.

Once that primary datacenter is up and running, which would be this machine, we’re going to have storage repositories and protocols attached to this machine, and all the data will be asynchronously replicated across the wire to our secondary machine.

This happens on a schedule about every one minute, and it’s ZFS snapshot replication under the covers. After one copy happens, another will happen one minute afterward, and one minute after that, and so forth.
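
(Conceptually, that once-a-minute cycle is a snapshot followed by an incremental ZFS send and receive. Here is a simplified sketch of the idea in Python; the pool name, SSH target, and flags are assumptions, and SnapReplicate’s actual implementation will differ.)

```python
# Simplified sketch of a snapshot-and-replicate loop (not SnapReplicate itself).
import subprocess
import time
from typing import Optional

POOL = "microsoft_blob"             # pool name, identical on both sides
TARGET = "softnas@<secondary-ip>"   # SSH target for the secondary node

def replicate_once(prev: Optional[str], snap: str) -> None:
    # Take a new recursive snapshot of the pool.
    subprocess.run(["zfs", "snapshot", "-r", f"{POOL}@{snap}"], check=True)
    send = ["zfs", "send", "-R"]
    if prev:
        send += ["-I", f"{POOL}@{prev}"]  # incremental since the last snapshot
    send.append(f"{POOL}@{snap}")
    sender = subprocess.Popen(send, stdout=subprocess.PIPE)
    # Pipe the stream over SSH into `zfs receive` on the secondary.
    subprocess.run(["ssh", TARGET, f"zfs receive -F {POOL}"],
                   stdin=sender.stdout, check=True)
    sender.wait()

prev = None
while True:
    snap = f"replica-{int(time.time())}"
    replicate_once(prev, snap)
    prev = snap
    time.sleep(60)                  # roughly every minute, as described
```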

The two things that have to be configured for this replication are the name of the pool and the size of the pool. Both of those variables need to be the same on each side in order for replication to happen.

Let’s go ahead and set up a pool that is equal to the pool that we’ve set up on our secondary machine. Our secondary machine, I have called Microsoft Blob and it has 10 gig disks. If you look at our details here, you’ll see that it has two disks that are hosted on this SoftNAS instance from Azure.

Let’s go ahead and do that on my primary machine. I come in here and click “create” on the pool creation wizard. I will name it Microsoft Blob just like my other one.

You’ll see that I have several different RAID options to use. I can use a JBOD array. I can use RAID 0 for striping. I can use RAID 10 for mirrors and stripes. RAID 5, 6, and 7 are the parity options: I can do single parity, dual parity, and triple parity.
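
(As a quick worked example of what those RAID choices mean for usable capacity, here is a sketch in Python; the disk count and sizes are arbitrary.)

```python
# Illustrative: usable capacity in GB for the RAID levels offered,
# given n identical disks of size_gb each.
def usable_gb(n: int, size_gb: int, raid: str) -> int:
    parity = {"raid5": 1, "raid6": 2, "raid7": 3}   # single/dual/triple parity
    if raid in ("jbod", "raid0"):
        return n * size_gb               # no redundancy, full capacity
    if raid == "raid10":
        return (n // 2) * size_gb        # mirrored pairs, half capacity
    return (n - parity[raid]) * size_gb  # capacity lost to parity disks

for level in ("raid0", "raid10", "raid5", "raid6", "raid7"):
    print(level, usable_gb(4, 10, level), "GB usable from 4 x 10 GB disks")
```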

Just for demonstration’s sake, I’ll use RAID 0 to give me the maximum speed possible across these two disks. I can select the two disks I would like to use. At the bottom, you’ll see a couple of different options.

I can force creation which basically says, “Hey, if there was a pool already created on these disks, overwrite it.” If you do have a pool on this disk and you’re trying to create another pool on top of it, we’re going to warn you because ZFS is very resilient and it can recover from a lot of errors.

If you do happen to have an issue where you disconnect a disk and it had a pool on it and now you reconnect it, we don’t want you to lose that data. It’s going to flag it and say, “Hey don’t use this disk. It already has a pool on it.”

Next is LUKS encryption; that’s the Linux encryption system. We are able to supply a password, and repeat that password, to enable AES encryption. The last one is sync mode, which is a write checksum; it’s making sure the writes are landing on the disk correctly.

We have three options. Standard does its best to check the write on every write; if it can’t, it comes back and checks it later. Always reserves CPU time to check every write. And disabled, which we don’t ever [inaudible 26:02] people using, never checks the write; it just goes forward and goes about its business. It is the fastest mechanism, but it is also the most careless and worrisome.

I’m going to go ahead and create this pool. Now we will have a pool of equal size and equal name to the pool on my secondary instance.

There are a couple of other options I can use on this tab. If I come in later and need to extend this for more data volume, I can click “expand,” add disks to this array, and it’s going to add those disks and make that storage larger.

I can import any ZFS pools that have been brought in orphaned. In a disaster recovery scenario, we can bring in those disks, attach them directly, and import the pools.

We can add a read cache. If we have high-speed local disks, they are a great fit for read cache, allowing us to set aside a certain percentage of space for read caching.

By default, ZFS takes half the system RAM for hot read caching, so we automatically have that many resources for caching. A read cache device layers on top of that as a second-level hot cache to give us even more caching.

The last piece here is write logging. This is the ZIL, the ZFS Intent Log. It gives us write security for writes that are under 32K. Anything that we’re writing to disk is going to be recorded in the ZIL, and we are able to use that to replay writes that had not yet landed if something goes wrong.

We can also add a hot spare device in here if we care to, but I’m not going to go into those any further. The next piece after we’ve created our storage pool is we need to create our writable protocols or our volumes or shares.

Let’s go over here to our Volumes and LUNs tab and create some volumes. We’re going to call this first one just Vol 1 and make it very simple. We will attach it to Microsoft Blob.

We can say let’s just do CIFS and AFP for this tab. We will thin provision this. Notice we can choose to thick or thin provision. We can choose if we’d like to use compression or deduplication.

A bit of a warning: compression uses a little bit more CPU time, and deduplication is intensive on RAM, so we advise you to bump your RAM up by about 1 gig per terabyte of deduped data.

This is inline dedupe, since ZFS is an inline file system, so everything is deduped on the fly and ready to go. Once again, we can set our sync mode directly on the volume versus directly on the pool. Either way, you can set it on volumes or pools.
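
(That RAM guidance turns into simple arithmetic; here is a trivial worked example in Python, with an assumed data size.)

```python
# Rule of thumb from above: add ~1 GB of RAM per 1 TB of deduplicated data.
def recommended_ram_gb(deduped_tb: float, base_ram_gb: float = 28.0) -> float:
    return base_ram_gb + deduped_tb * 1.0

# e.g. the 28 GB instance from earlier holding 8 TB of deduped data:
print(recommended_ram_gb(8))  # -> 36.0 GB recommended
```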

Also, notice we have a snapshots tab. This allows us to select which type of snapshotting we’d like: the default schedule, which is about every three hours or so, or 24/7, which is every single hour for all 24 hours. You can come in and edit that schedule or create schedules as you would like.

We also have a retention policy here that sets the retention times for each of the types of snapshots. These are ZFS snapshots that are stored on the volume itself. I’m going to go ahead and create a couple of these volumes to be used for our data, just to demonstrate that whenever we do our replication, that data is actually replicated across the wire.

I’m going to select Vol 2, and this time we’ll do maybe NFS and CIFS. I’ll create it. Then we’ll create a block device on this last one.

Vol 3, and we’ll call this Microsoft Blob once again. This time I’m going to do an iSCSI LUN. Notice that whenever we select the iSCSI block device, our thick provisioning button is automatically selected. This is basically because most of the time, whenever you have an iSCSI device, it has a finite LUN size.

I’m going to say 5 gigs. Also, notice that we have a LUN targets tab up here. That means we just need to generate an IQN for any of the devices to hook into. We’ll generate an IQN here, and that way all of our iSCSI initiators can find those targets. And click “create.”
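
(An IQN is just a structured string. Here is a sketch in Python of generating one in the standard iqn.yyyy-mm.reverse-domain:identifier format; the domain and naming scheme are illustrative, not necessarily what SoftNAS generates.)

```python
# Illustrative: generate an iSCSI Qualified Name (IQN) for a target.
import uuid
from datetime import date

def generate_iqn(domain: str = "com.softnas") -> str:
    today = date.today()
    # Format: iqn.<year>-<month>.<reversed domain>:<unique identifier>
    return f"iqn.{today.year}-{today.month:02d}.{domain}:{uuid.uuid4().hex[:12]}"

print(generate_iqn())  # e.g. iqn.2015-03.com.softnas:3f2a9c1d7b44
```
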
Everything now is basically created. We’ve created disk repository shares ready for users to start dropping data in there. If we wanted somebody to come in here and write to Vol 1, we would say, “Hey, this one’s a CIFS share. Go to \\<IP address>\microsoft_blob\Vol 1,” and you would have access to these volumes.

If you had an NFS share, you could come in and do the same with Vol 1 and Vol 2. All of these exports are ready to go and ready to be written to.

Notice that we do have the ability to integrate directly with Active Directory. It’s a simple Active Directory wizard that asks you for your domain name, asks you for your NetBIOS name, and then asks for an administrator, a machine user that can add machines into the AD. This basically handles the addition into AD.

Once that’s all done and this machine over here is added into Active Directory, you can then assign user rights and group rights to all its file shares and so forth within Windows.

Now we have everything set up. However, if we look at our secondary machine, we don’t have any data here. If we look at our volumes and LUNs tab, there is no data on this secondary machine.

We want to now have a backup, a replicated copy of data on this second machine. Let’s go ahead and set that up through something that we call SnapReplicate.

I’m going to go ahead and add our replication, and we’re going to replicate to this other machine. This is 49.121.150.65. I’m going to give it its password. Let’s see, which one is this?

Make sure this is the right password. I’m not sure if it is or not. Wrong password. Let’s try again. Next, and finish. Notice now, in the background, work is in progress to set up this replication. Replication is now underway, so you can see all the mirrors are going. Mirror complete, mirror on the way. Complete.

Now we’ve basically taken all that data, and if we did have volumes of user-data in here, we would have all that data now and it is now copied to our secondary machine. If I refresh here, I will now see all of my data repositories.

We can demonstrate how our replication works by simply coming in and saying, let’s go to a volume here. On volume 1, let’s come down and create a snapshot. Oops, we already have snapshots.

If somebody came to me and said, “Hey I’ve got information on Vol 1. I need to recover that and it’s in an NFS share.” I could say, “Okay, let me go ahead and build you a snapclone. Then you could mount that snapclone and grab that data for yourself.”

We already support Microsoft Previous Versions. If this were in a CIFS directory that was configured for Previous Versions, they would be able to do this all on their own.

However, in this instance, this is someone coming from an NFS background saying, “Hey I need access to this machine.” Notice now, I’ve created a snapclone of this information. Then they would be able to come in and mount that data.

Let’s come in over here and make sure that my replication is happening. Oh, I’ve just had a failure on it. Something happened. I’m sorry. I grabbed the wrong snapshot for that.

I would need to have a full snapshot in order to create a snapclone and have it replicated. But basically, that’s the idea. All of our data is going to be copied from one machine directly over to the other machine. Every minute, we will be doing a replication of that data.

That’s basically all we have for the demonstration. We’re going to jump back over to this slide where I talk about a couple of little use-cases. Then we’ll end up and close, get some questions answered, and finish up here.

Let me bring up the slideware one more time. David, can you see my slideware?

David:             Yeah, we can see that, Matt.

Matt:             Great. A couple of use cases where SoftNAS and Azure really make sense. I’m going to go through these and talk about the challenge. The challenge would be a company that needs to quickly SaaS-enable a customer-facing application on Azure, but the app doesn’t support blob. They also need AD or LDAP integration for that application.

What would the solution be? One solution would be rewriting your application to support blob and AD authentication. It is highly unlikely that that would ever happen.

What else could you do? Instead of rewriting that application to support blob, continue to do business the way you always have. That machine needs access via NFS? Fine. We’ll just support that via NFS through SoftNAS.

Drop all that data on a Microsoft Azure backend, store it in blob, and let us do the translation. Very simple access, so then we have access for all of our applications, on-premises or in the cloud, directly to whatever data resources they need, and it can be presented with any of the protocols listed: CIFS, NFS, AFP, iSCSI.

The next use case: disaster recovery. This is what we did in the demonstration. The challenge is we have got a company that needs reliable off-site data protection.

Maybe they have a big EMC array at their location that they have several years of support left on. They need to be able to keep getting use out of it, but they need a simple integration solution. What would be the solution?

It would be very easy to spin up a SoftNAS instance on premises, directly access that EMC array, and utilize its data resources for SoftNAS. We can then present those data repositories to their application servers and end-users on site, and replicate all that data using SnapReplicate into Microsoft Azure.

We would have our secondary blob storage in Azure and we’d be replicating all that data that’s on-premise into the cloud.

What’s great about this solution is that it becomes a gateway. When I get to the end of support on that EMC array and I say, “We need to go buy a new array or renew support for that array,” well, we’ve got this thing running in Azure already, so why don’t we just cut the cord? It is the exact same thing that’s running in Azure. We could just start directing our application resources to Azure. It’s a great way to get you moving into the cloud and get a migration strategy moving forward.

The last one is hybrid on-premises usage, and I alluded to this one earlier with the burst-to-cloud type of thing. This is a company that has performance-sensitive applications that need a local LAN, and they need off-site protection or capacity.

The solution basically would be to set up replication to Azure and then expand capacity there. Basically, whenever they run out of space on-premises, we would then be able to burst out into Azure and create more and more virtual machines to access that data.

Maybe it’s a web services account that has a web portal UI or something like that that just needs a web presence. Then we’re able to run multiple copies of different web servers, all load balanced, all accessing the same data on top of Microsoft Azure through SoftNAS.

All of these use cases are very possible. These are all use cases that I have seen customers running today.

Last, a SoftNAS overview and where our products land. SoftNAS Cloud is our main web offering. It’s offered on Azure, AWS, vCloud Air, and CenturyLink Cloud. It is a public cloud NAS, so any resources locally available on that cloud offering are present in SoftNAS Cloud, as well as any object offering throughout the world. We can make object connections throughout the world and access them.

SoftNAS Cloud File Gateway. This is an on-premises NAS. It’s built on a VMware architecture, so this is basically a SoftNAS VM that has access to your local NAS files as well as local disk storage.

SoftNAS Object Filer. This is directed at somebody who does not have local data resources but wants to utilize an object resource, either in the cloud or an object device locally. We would be able to give them an object filer that has just S3 object access included, so they’ll be able to present and use object data repositories on that installation.

Last is SoftNAS Service Provider, which creates a multi-tenant NAS solution. It has a REST API, so you can integrate billing and tiering into this solution. It also has iSCSI connections with object storage, so we are able to use that type of connection to a multitude of different backend offerings.

Some of our last things are technology partners. We’d like to thank all of them: Microsoft, the Amazons, the VMwares. All these guys are out there helping us make our product great. We wouldn’t be here without Microsoft Azure helping us promote our product and go forward with a great solution.

Lastly, here is our brand sheet: people that you know that are SoftNAS customers today, and we have many hundreds of customers out there that are not listed here.

Here are just some of the customers that we work with directly: Netflix, Coca-Cola, Nike, Boeing. We have all sorts of customers out there from all different verticals using our product in all different ways.

With that, I’m going to give it back to David while I take a look at some of the questions. While he finishes that up, I’ll go through some questions and we’ll come back to them.

David:             Okay, Matt. Thanks a lot. Again, just a reminder: if you have any questions, please use the questions pane, but I also have a few here that I’ll read out. Just on next steps, I’m sure for most of you it’s your first time hearing about SoftNAS and our solution.

If you want to learn more, we do have a free 30-day trial version of SoftNAS cloud on Azure that you can try. If you go to softnas.com/azure, you can download that version there and we can help you out.

If you want to learn a bit more, you can go to our website, softnas.com/azure. If you want to contact us, you can go to the contact page there. If you have any follow-up questions for the likes of Matt and the team, you can reach out there, and also make sure to follow us on Twitter.

Matt, if you want to jump at it, there’s one question there and I have a few questions here that I can call out.

Matt:             The question is, “For a BDR solution, would you use Cloud File Gateway on the client side with replication to SoftNAS Cloud?” Yes, you’d want to replicate that data from the on-premises file gateway up into SoftNAS Cloud on Microsoft Azure. That’s correct.

David:             Another question here, Matt. What version of NFS is supported?

Matt:             We support both versions 3 and 4 of NFS. The likely follow-up question is what versions of SMB we support: we support SMB 2 and 3.

David:             What’s the max latency SoftNAS will support for site replication?

Matt:             The max latency that we support for site replication is really not the question. We are flexible enough to handle latency from any reasonable network. There’s no set-in-stone number where 200 milliseconds of latency is or isn’t an acceptable range. We are very flexible with our solution. As long as we have a fairly reliable connection, we can make up the latency and keep the SoftNAS replication caught up.

David:             Someone has a question here on RAID. What type of RAID is being used under SoftNAS?

Matt:             It’s built around software RAID, and we don’t tell you what type of RAID you have to use. It depends on what your situation is. If you’re inside of Microsoft Azure and you trust their local disk storage to be durable enough that you’re not going to have to worry about RAID in your solution, or it’s not that pressing of data, you can go ahead and use RAID 0 and get the fastest capabilities out of it.

However, if you’re on-premises and you don’t have a hardware RAID solution, we give you the ability to use up to RAID 7. If you wanted to use RAID 6 to get really good performance and redundancy at the same time, you are welcome to do that.

David:             I see Travis has another question there on the questions pane. How much would encryption inhibit or prevent deduplication benefits?

Matt:             That’s a tricky question. Deduplication actually happens on the fly, so we’re going to be doing the dedupe inline. Encryption is not going to come into play there. The encryption is going to happen on the actual container itself.

We are going to encrypt the container itself, and then whenever we drop the data in there, it’s going to dedupe.

David:             A couple more questions up here. Is it a good idea to use SoftNAS as a backup target? I think you covered that in one of our use cases there, I believe.

Matt:             Absolutely, that is one of our biggest use cases. Can we use it as a backup target? I guess I didn’t touch on it as much in the use cases. I have done a previous webinar directly on this subject, where we demonstrated how we went about using a Veeam backup solution from a Windows 2012 server and using SoftNAS as our target.

It is a great target for backup solutions. We have used it that way here locally at SoftNAS, Incorporated. It is absolutely a perfect solution for that, because we can provide fast access over any protocol that your backup solution needs.

David:             That’s right. You can find that webinar on softnas.com in the webinars archive section if you’re interested in playing it back. Just the last question that I have here: does SoftNAS provide performance reports to show or to see hot versus cold data volumes?

Matt:             Absolutely. We do provide a dashboard that gives you access to all that data, so you actually can come here and see which data disks are getting hit the hardest and where we have data that’s just stored and asleep, basically never touched. The dashboard gives you visibility into that data as it reports in. And we can actually export that as well, so you can integrate it with SMTP or SNMP via things like WhatsUp Gold or a similar product.

David:             I think that’s all the questions we have. It looks like that’s it. If you have any further questions, as I mentioned, there are a few places where you can contact SoftNAS. If you want to reach out and learn more or download a trial, please do that. I’m sure Matt and the team will be involved in that POC.

Matt, anything to add at the end? Any common things that you see, or things people should look out for, or have we covered most of the areas?

Matt:             No, I think I’ve got everything covered. But yes, if you do have any questions, please do not hesitate to contact us. My email address is mblanchard@softnas.com. I am more than willing to answer any questions you have about SoftNAS and to assist you in doing a free trial, setting it up, and getting it running.

David:             Thanks, Matt. Thanks to everyone for attending. As you leave today, there will be a short survey. So if you can provide some feedback there, that also gives us an indication of any topics you’d like for future webinars.

As I mentioned at the start, recording and slides will be sent out very shortly. I hope to see you at the next webinar and have a good day. Thank you.