Biomedical Research Computing
Large-scale, data-driven biomedical research needs coordinated investment in research computing, and so in 2009 the Research Computing Core was created to develop such facilities. In 2017, the Core became a partnership with the newly opened Big Data Institute (BDI), with the vision of providing a unified Biomedical Research Computing (BMRC) platform serving both departments as well as collaborators from around Oxford. BMRC is now the pre-eminent facility in the University for data-driven, high-memory, high-performance computing. With the University adopting an increasingly federated approach to the provision of research computing, the BMRC facility already supports researchers from more than a dozen departments, all of whom contribute to the continued growth of the facility.
With BMRC's new, wider remit, the description of what we do, how we do it, and how we charge for access is being centralised in the BMRC section of the Medical Sciences Division website (Oxford SSO login currently required).
For all support, advice and information, email the BMRC team at firstname.lastname@example.org.
Overview of Main Facilities
Currently, across the server rooms within WHG and BDI, BMRC offers nearly 7,000 cluster compute cores, 60 cluster GPU cards, 10 PB of raw high-performance (Spectrum Scale) storage and 8 PB of raw lower-grade storage for data acquisition and archiving. We encourage all WHG researchers to use our platforms and offer training and workshops to help people get started. BMRC maintains an extensive set of more than 300 applications that researchers can use, confident that they have been compiled and installed correctly.
In addition, we run an OpenStack cloud with 1,200 cores and 40 GPUs, backed by 300 TB of extreme-performance NVMe Ceph storage, and we are starting to encourage research groups to migrate to this platform.
BMRC also runs a smaller high-compliance secure VDI/cloud platform, a test-and-development OpenStack cloud and a small oVirt VM platform. We are also exploring new storage platforms, ranging from ultra-fast NVMe-over-fabric to bulk S3-compatible object storage. All these platforms are linked over our 100 Gb Ethernet and EDR InfiniBand networks.