I think you do more harm than good in limiting the simulator. The EMC VSA is heavily used in the blogging community as a virtual SAN appliance for home labs, and EMC gets a lot of exposure that way: non-customers get to use the Unisphere interface. When I want to compare deduplication ratios between NetApp and EMC, to see which is the better product to continue to invest money in, I can't use your simulator that way. I have done deduplication testing with the EMC VNX virtual appliance on volumes with 10 terabytes of data. The hardware appliance is always going to perform better, because things have to be emulated in a virtual appliance. The EMC Celerra virtual appliance, formerly known as the EMC UBER VSA, was updated to run the DART 7 code, and I don't believe it has any limits. Many companies release virtual appliances of their products with minimal restrictions, and I don't buy the idea of people using an ONTAP simulator in production, or of lost sales because the simulator had no restrictions.

I'll address your questions below, and have a couple of questions of my own:

1. I can say that we are definitely discussing increasing the capacity limit on the simulator to match modern capacities and requirements. I can't discuss details or dates on this forum right now, but will say that making the simulator available with larger capacity will be tied to a future release of Data ONTAP. We are very unlikely to release an updated simulator off-cycle from Data ONTAP.

2. The virtual NVRAM has been increased 1600% to a whopping 32 MB. You can see some of that in some of the console messages when the simulator boots. I say "whopping" with a bit of humor, but that level of vNVRAM is sufficient for very decent performance from a simulator. While we haven't done a formal performance timing profile for the simulator, I doubt the vNVRAM is the bottleneck for the DOT8 sim.

3. In my experience, the DOT8 simulator runs as fast as the disk on which the VM lives. When it's on an SSD, the simulator performance is very good.

Now for my question: I'm not familiar with the Celerra simulator. What are its capacity limits? That might help us better determine where we should move our limits.

We understand that, and wish to make the simulator a great tool for demonstrating, learning, and testing the entire solution as you described. I have some suggestions, but as you imply, it would be much easier to do with increased capacity limits:

- Run through the capacity increase process to max out the capacity on the simulator.
- Consider dumping everything into a single aggregate and maxing out the RAID group sizes. This goes against best practices for resiliency, but for the simulator that isn't a primary factor. Do the calculations to see if using RAID4 would help, relative to the number of RAID groups and parity disks.
- If you don't need the extra snapshot reserve, consider turning it down to make more of the capacity available to the active volumes.
- Use the new DeDupe and Compression licenses to get more effective capacity out of the volumes that you create. If you know that dedupe or compression wouldn't work for your dataset, then consider splitting that data out into a separate volume and using the space efficiency features only on the data where they help. You may get a bit more capacity by putting everything into a single volume with dedupe and compression enabled, but that might not be what you want to test out, and it certainly isn't best practice for the solution areas you describe.
- Thin provision everything you can and keep an eye on the actual free space in the aggregate(s).
- You may just need to create multiple POC simulated environments where you test only a portion of the entire solution set within each environment. Consider creating a VM team or vApp that you can clone or deploy from template to make that easier.
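The RAID4-versus-RAID-DP calculation suggested above comes down to parity overhead: RAID4 reserves one parity disk per RAID group, while RAID-DP reserves two. A minimal sketch of that arithmetic is below; the disk count and RAID group size used are hypothetical examples, not values taken from the simulator.

```python
import math

def usable_disks(total_disks: int, raid_group_size: int, parity_per_group: int) -> int:
    """Disks left for data after parity overhead.

    parity_per_group is 1 for RAID4 and 2 for RAID-DP.
    """
    groups = math.ceil(total_disks / raid_group_size)  # RAID groups needed
    return total_disks - groups * parity_per_group     # subtract parity disks

# Hypothetical layout: 28 virtual disks, RAID group size of 14.
disks, rg = 28, 14
print("RAID-DP data disks:", usable_disks(disks, rg, 2))  # 28 - 2*2 = 24
print("RAID4   data disks:", usable_disks(disks, rg, 1))  # 28 - 2*1 = 26
```

Note that the maximum supported RAID group size differs between RAID4 and RAID-DP on real systems, so run the numbers with the group sizes your version of Data ONTAP actually allows.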