To create the Gemini computer system, IBM engineers soldered together hundreds of individual transistors, resistors and capacitors. The CPU was so simple that it only understood 16 different instructions, although the finished machine could execute 7,000 of those instructions per second.

Interestingly, when writing the onboard software, IBM engineers pioneered the first use of the software engineering techniques that all avionics developers use today. They created modules that could be verified mathematically as being correct and which each performed just one simple task.

The care with which the computer was programmed didn't end there. Errors while loading the finished program from magnetic tape into the computer's 4kB of non-volatile core memory were an ever-present danger.

The calculations required were no less advanced. Gemini's onboard computer had to help fly the craft during six distinct mission phases: pre-launch (where it monitored the health of both itself and other onboard systems), blast off, achieving a stable orbit, catching a drone (dubbed Agena), docking with it and finally negotiating a safe re-entry.

The curved nature of an orbit makes catching up to another craft and docking with it a confusing and dangerous procedure. For example, firing a rocket motor to get closer to another spacecraft actually puts your own ship into a higher orbit. Paradoxically, you have to aim lower than the other craft by a certain degree to catch it up, and that's difficult to do by hand and with limited fuel.
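That "aim lower" rule is a direct consequence of Kepler's third law: a lower circular orbit has a shorter period, so a chaser that drops slightly below its target gains a little phase angle on every revolution, while firing prograde raises the orbit and makes the chaser fall behind. The short Python sketch below makes the arithmetic concrete; it is only an illustration of the orbital-period effect, and the 300 km and 280 km altitudes and the two-body constants are assumptions for the example, not figures from the Gemini program or its flight software.

```python
import math

# Illustrative two-body constants (assumed for this sketch, not Gemini flight data).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3 s^-2
R_EARTH = 6.371e6           # mean Earth radius, m

def circular_period(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude, via Kepler's third law."""
    a = R_EARTH + altitude_m                      # orbital radius (= semi-major axis)
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)

# Hypothetical altitudes: a target at 300 km and a chaser that "aims lower" at 280 km.
target_period = circular_period(300e3)
chaser_period = circular_period(280e3)

print(f"Target period: {target_period / 60:.2f} min")
print(f"Chaser period: {chaser_period / 60:.2f} min")
print(f"The lower craft finishes each lap about {target_period - chaser_period:.0f} s "
      "earlier, so it steadily closes the gap; speeding up (and thereby raising the "
      "orbit) would lengthen its period and leave it further behind.")
```

The real rendezvous targeting was far more involved than this, but the period difference between a lower and a higher orbit is the core reason the intuitive "point at it and thrust" approach fails.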
COMPUTING SYSTEMS OVERVIEW

This table shows the systems and related resources at the NASA Advanced Supercomputing (NAS) Facility and the NASA Center for Climate Simulation (NCCS).

Information about HEC Systems and Related Resources:

- 4 E-Cells (1,152 nodes), 16 Apollo 9000 racks (2,048 nodes)
- Total LINPACK rating: 9.07 petaflops (#58 on June 2022 TOP500 list)
- Total HPCG rating: 172.38 teraflops (#44 on June 2022 HPCG list)
- Intel Xeon Cascade Lake processors (2.5 GHz)
- 5.44 petaflops LINPACK rating (#53 on November 2020 TOP500 list)
- 106.54 teraflops HPCG rating (#349 on November 2020 HPCG list)
- Intel Xeon Gold 6148 Skylake processors (2.4 GHz) and Intel Xeon E5-2680v4 Broadwell processors (2.4 GHz)
- 5.95 petaflops LINPACK rating (#90 on June 2022 TOP500 list)
- 175 teraflops HPCG rating (#43 on June 2022 HPCG list)
- Intel Xeon Haswell E5-2680v3 processors (2.5 GHz) and Intel Xeon Broadwell E5-2680v4 processors (2.4 GHz)
- Intel Xeon Ivy Bridge E5-2680v2 processors (2.8 GHz)
- Intel Xeon Sandy Bridge E5-2670 processors (2.6 GHz)
- 1,024 Intel Xeon Sandy Bridge cores and 684 Intel Xeon Skylake cores
- GPU nodes: 3 racks (83 nodes total) enhanced with NVIDIA graphics processing units (GPUs)
- Intel Xeon Platinum 8280 Cascade Lake processors (2.7 GHz)
- 4x NVIDIA V100 GPUs with 32 GB of VRAM and NVLink
- Dual Intel Xeon Cascade Lake Gold 6248 CPUs, 20 cores each (2.50 GHz)
- Dual 100-Gb HDR100 InfiniBand high-speed network interfaces
- 3.8-terabyte RAID-protected NVMe drives, mounted as /lscratch
- 8x NVIDIA A100 GPUs with 40 gigabytes of VRAM and NVLink
- Dual AMD EPYC Rome 7742 CPUs, 64 cores each (2.25 GHz)
- 14 terabytes of RAID-protected NVMe drives, mounted as /lscratch
- High-speed InfiniBand and 10 Gigabit Ethernet networks
- Dual-socket, 10-core Intel Xeon 2…
- 15 Samsung UD55C 55-inch displays in 5x3 configuration
- 128-screen tiled LCD wall arranged in 8x16 configuration
- 128 graphics processing units (Nvidia GeForce GTX 780 Ti)
- 2,560 Intel Xeon E5-2680v2 (Ivy Bridge) cores (10-core)
- 2 dual-core Intel Xeon Harpertown processors per node
- 90 petabytes of RAID disk capacity (combined total for all systems)
- Scalable Compute Unit 14 = Supermicro FatTwin Rack Scale System
- Scalable Compute Unit 15 = Aspen Systems and Supermicro TwinPro Rack Scale System
- Scalable Compute Unit 16 – CPU & GPU Nodes
- 12 Supermicro GPU nodes, each with AMD EPYC Rome and 4 NVIDIA A100 GPUs
- 576 total AMD EPYC Rome processor cores (2.8 GHz)
- Scalable Compute Unit 16 CPU-Only Nodes = Aspen Systems and Supermicro TwinPro nodes
- Intel Xeon Cascade Lake Refresh processor cores (2.4 GHz)
- Scalable Compute Unit 17 CPU-Only Nodes = Aspen Systems and Supermicro TwinPro nodes
- 550+ Hypervisors – Intel Xeon Westmere, Ivy Bridge, Sandy Bridge, and Broadwell processor cores and AMD Rome and Milan processor cores