Thank you for your patience. Attached (MWYQ18814.pdf) you will find a quote for 90 Twin Squared chassis, for a total of 360 compute nodes. Each node has 128GB of 2133MHz DDR4 memory, two Xeon E5-2695v3 CPUs, and an SSD for scratch space. Each node also has onboard FDR InfiniBand. I realize that you asked for QDR, but since no offering has onboard QDR, the alternative would be to use a server with no onboard InfiniBand and to add a discrete card. That option would actually cost slightly more (around $100 per chassis) than simply using onboard FDR InfiniBand, which is why we eschewed it. The one downside is that, as with all onboard devices (RAID, Ethernet, etc.), if the device fails, the whole board must be replaced. Let me know if you have a preference between these two options.

You'll also notice a second quote, MWYQ18816.pdf. I wanted to show you the price difference that a less expensive CPU, such as the ten-core Xeon E5-2650v3, could offer. Because it has fewer cores, reaching the 10,000 cores you wanted required 125 chassis (500 nodes). One benefit is that the cluster would have a larger total memory pool (64,000GB instead of 46,080GB).

There are some downsides, though. First, the additional nodes would require extra power and cooling. I've attached a rough five-year cost estimate (PPPL_Xeon_cluster_power_cooling_estimate.xlsx). I've labeled the parts of the model that are assumptions, although we do our best to base our estimates on research or our own experience. The interest rate/cost of capital may or may not be close to reality, but that is a number for your finance department to decide. As with all of our assumptions, feel free to change them and adjust the model.

Another potential cost of adding nodes is the InfiniBand fabric. If you have a 400-port Mellanox QDR InfiniBand chassis switch, for example, the cost of going from 360 to 500 nodes could be substantial.
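For reference, the core and memory totals in the two quotes can be cross-checked with a quick script. The per-chassis and per-socket figures (4 nodes per Twin Squared chassis, 2 sockets per node, 14 cores per E5-2695v3, 10 cores per E5-2650v3) are assumptions consistent with the quoted totals; note that both quoted memory totals work out to 128GB per node. The discounting helper mirrors the cost-of-capital assumption in the spreadsheet, with purely hypothetical numbers in the example call:

```python
# Sanity check of the two quoted cluster configurations.
# Assumptions: 4 nodes per Twin Squared chassis, 2 sockets per node,
# 14 cores per Xeon E5-2695v3, 10 cores per Xeon E5-2650v3,
# 128 GB RAM per node (implied by the quoted 46,080 GB and 64,000 GB totals).

def cluster_totals(chassis, cores_per_cpu, ram_gb_per_node=128,
                   nodes_per_chassis=4, sockets_per_node=2):
    """Return (nodes, cores, total RAM in GB) for a given chassis count."""
    nodes = chassis * nodes_per_chassis
    cores = nodes * sockets_per_node * cores_per_cpu
    ram_gb = nodes * ram_gb_per_node
    return nodes, cores, ram_gb

def present_value(annual_cost, rate, years):
    """Discount a constant annual operating cost at the given cost of capital."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

# Quote MWYQ18814: 90 chassis of 14-core E5-2695v3
print(cluster_totals(90, 14))    # (360, 10080, 46080)

# Quote MWYQ18816: 125 chassis of 10-core E5-2650v3
print(cluster_totals(125, 10))   # (500, 10000, 64000)

# Hypothetical example only: $100k/year power+cooling, 5% cost of capital, 5 years
print(round(present_value(100_000, 0.05, 5), 2))
```

Changing the per-node RAM or core-count parameters regenerates the totals, the same way the labeled assumptions in the spreadsheet can be adjusted.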
In addition to the missing InfiniBand switch(es), you'll notice that PDUs, racks, a head node, and storage are also absent. Are you intending to re-use any existing PDUs and racks? What sort of storage requirements do you anticipate having? Also, for a cluster of this size, extra care should be given to cooling, so we will have to discuss what options are available to you in this area as well.

Finally, I would like to point out that the pricing here is still very preliminary. For a quantity of servers this large, we would negotiate with our suppliers in order to get better pricing for you. These figures should at least provide you with a ballpark estimate. Please let us know what questions you have. We view cluster configuration as an iterative process, so please feel free to make any suggestions as well. Thank you once again.