The Latest Trends in Hardware Development and the Challenges of Data Management
Advances in hardware and software technologies have greatly shaped the development of data management and analysis systems. Software performance benefits from hardware advances but is also bounded by hardware characteristics, so software frameworks and systems must be designed with these trade-offs in mind. Conversely, hardware advances are driven by demands for software performance; for data management and analysis systems, the hardware sets the performance ceiling.
Software must optimize its algorithms and data structures to fully utilize the hardware. Traditional data management techniques, however, face unprecedented challenges driven by the demands of big-data processing. The design of upper-layer software systems, including architecture selection and optimization techniques, depends largely on the underlying computer hardware. The advent of high-performance processors, hardware accelerators, non-volatile memory (NVM), and high-speed networks requires data analysis software to be redesigned accordingly: new hardware reshapes computer system architecture and demands matching adaptations in software design.
Hardware Trends
Recent advances in storage, processor, and network technologies are changing the traditional environment of data management and analysis. New hardware and features such as high-performance processors, hardware accelerators, NVM, and RDMA-capable networks are becoming the foundation of future computing platforms. These trends also make software design considerably more complex.
Processor Technology Trend
Over the past 40 years, processor technology has moved from scale-up to scale-out, with a focus on placing more cores on each processor rather than increasing clock speed. As a result, multi-core parallel processing has become mainstream. Although this has significantly improved data processing capability, software cannot benefit from the change automatically: programmers must transform traditional serial programs into parallel programs optimized for multi-core processors. Furthermore, to meet the demands of highly concurrent applications, processors are increasingly specialized for particular workloads. This has led to dedicated hardware accelerators such as GPUs, Xeon Phi, and FPGAs, which can efficiently offload compute-intensive and data-intensive portions of a workload from the CPU.
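As a minimal illustration of this serial-to-parallel transformation, the sketch below splits a reduction across a worker pool. The function names and chunking scheme are illustrative, and actual multi-core speedup depends on the runtime and workload; the point is the restructuring, not the numbers.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum(data):
    # Traditional serial formulation: one core walks the whole input.
    total = 0
    for x in data:
        total += x
    return total

def parallel_sum(data, workers=4):
    # Parallel reformulation: split the input into independent chunks,
    # reduce each chunk concurrently, then combine the partial results.
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(serial_sum, parts))
```

The same decompose-reduce-combine pattern underlies most multi-core and accelerator offloading schemes: the work must first be expressed as independent pieces before any hardware parallelism can help.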
Storage Technology Trends
As processor and hardware-accelerator technologies develop rapidly, the performance gap between the CPU and storage continues to widen, making data access a non-negligible performance bottleneck. The new storage medium NVM, however, offers a potential avenue for breaking the I/O bottleneck. NVM, which includes PCM, MRAM, RRAM, and FeRAM, combines DRAM-like access speed with disk-like persistence, effectively weakening the “performance wall” of traditional storage media. New storage technologies have also had a significant impact on processor technology: 3D-stacking technology delivers high-performance data caches to support massive parallel processing. Combined with NVM, this will change the existing storage hierarchy and optimize critical data-access paths to bridge the performance gap between storage tiers.
Network Technology Trends
Network I/O bottlenecks are another major performance issue in the data center. Traditional Ethernet networks cannot meet the demands of data-intensive applications, so high-performance network technologies such as InfiniBand and RDMA over Converged Ethernet (RoCE) have become increasingly popular. These technologies provide low-latency, high-bandwidth communication between nodes and support remote direct memory access (RDMA), which transfers data directly from the memory of one machine into the memory of another, avoiding the need to copy data multiple times along the way.
New Hardware System Architectures in Platforms
The emergence of high-performance processors and new accelerators has shifted processing architectures from single-CPU designs to hybrid ones. While traditional software-based data processing models have matured, software-hardware co-design is now being explored to push performance further. The unique properties of NVM impose new constraints and requirements on data management systems, making NVM-specific architectures necessary. Additionally, RDMA-enabled networks are overturning assumptions built into traditional distributed data management systems, and a ground-up hardware-aware design is needed to fully exploit the benefits of high-performance networks.
New Hardware Storage and Indexing Techniques in Platforms
New hardware such as NVM offers a variety of storage options, which makes data management and analysis more complex. NVM can replace or complement traditional storage media such as DRAM and HDDs, reducing read/write latency across storage tiers. NVM is typically abstracted in one of two ways: as a persistent heap or as a file system. Such an abstraction can serve as a basic building block for upper-layer data processing, but changes and improvements in the data management systems above it remain essential for high-level features. The differing performance characteristics of NVM devices also affect data access and the processing frameworks of heterogeneous processor architectures. Indexes are a key technology for organizing data efficiently to accelerate upper-level analytics. Existing indexes designed for traditional storage media can be ineffective in NVM environments, because frequent updates shorten device lifespan and degrade performance. Merge updates and late updates are typical approaches for reducing frequent, small writes. Future NVM indexing technologies should control the read/write paths and the regions affected by index updates, and should support layered indexing across the NVM storage hierarchy.
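The merge-update idea can be sketched with a toy index that stages small updates in a DRAM buffer and folds them into an NVM-resident sorted run in bulk. All names here (`nvm_run`, `merge_threshold`) are hypothetical, and an in-memory dict and list stand in for the real persistent structures; the point is that many tiny NVM writes collapse into occasional sequential merges.

```python
class MergeUpdateIndex:
    """Toy merge-update index: updates are staged in a DRAM buffer and
    merged into the NVM-resident sorted run in bulk, reducing the number
    of fine-grained NVM writes. Illustrative only, not a real design."""

    def __init__(self, merge_threshold=4):
        self.nvm_run = []        # stands in for the persistent sorted run
        self.buffer = {}         # DRAM staging area for recent updates
        self.merge_threshold = merge_threshold
        self.nvm_writes = 0      # counts bulk merges, not individual updates

    def insert(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.merge_threshold:
            self._merge()

    def _merge(self):
        # One bulk, sequential merge replaces many small in-place NVM writes.
        merged = dict(self.nvm_run)
        merged.update(self.buffer)
        self.nvm_run = sorted(merged.items())
        self.nvm_writes += 1
        self.buffer.clear()

    def lookup(self, key):
        if key in self.buffer:   # the freshest value wins
            return self.buffer[key]
        for k, v in self.nvm_run:
            if k == key:
                return v
        return None
```

A late-update variant would apply the same staging idea but defer the merge until a read actually needs the affected range, trading lookup cost for even fewer writes.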
New Hardware Query Processing and Optimization in Platforms
Traditional query algorithms and data structures are ill-suited to the non-volatile memory (NVM) storage environment, and reducing NVM-oriented writes is a key optimization goal. Several techniques have been proposed to optimize NVM writes, including write avoidance, write cancellation, and write pausing. Algorithms can also be strengthened at a higher level, for example by adding extra caches or exploiting low-overhead NVM reads.
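Of these strategies, write avoidance is the simplest to sketch: read the current cell first and skip the expensive NVM write when the value is unchanged. In the sketch below, a plain dict stands in for the NVM medium and the counters are illustrative.

```python
def write_avoiding_store(nvm, addr, new_value, stats):
    # Write avoidance: an NVM read is much cheaper than an NVM write,
    # so compare before writing and skip redundant stores entirely.
    # `nvm` and `stats` are dicts standing in for the persistent medium
    # and its instrumentation; both are assumptions of this sketch.
    if nvm.get(addr) == new_value:
        stats["avoided"] += 1
        return
    nvm[addr] = new_value
    stats["written"] += 1
```

Write cancellation and write pausing work at the device-scheduling level instead, aborting or deferring in-flight writes so that latency-critical reads are served first.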
Query optimization technologies have evolved alongside hardware, with significantly different optimization goals at each stage. These techniques now rely heavily on the underlying hardware, from SIMD instructions to GPUs and FPGAs. However, current techniques for new hardware lack a holistic treatment of evolving hardware and algorithms, and must be repeatedly reworked to accommodate new hardware features.
New Hardware-Enabled Transaction Processing in Platforms
Transaction processing in a DBMS involves recovery and concurrency control, both closely tied to the storage and computing environment. With the advent of NVM, recovery methods based on write-ahead logging (WAL) face new challenges. NVM write ordering can be guaranteed by hardware-level primitives and optimizations of the processor cache. Memory barriers can ensure that data reaches NVM in order, but they degrade the transactional throughput of WAL-based systems. Existing logging technologies for NVM are stop-gap solutions; the ultimate goal is logging designed for a pure-NVM environment.
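The WAL-on-NVM ordering issue can be sketched as follows. The explicit `persist` call stands in for a cache-line write-back plus memory barrier (e.g., CLWB followed by SFENCE on x86); the class is an illustration of the ordering rule, not a real logging implementation.

```python
class NVMLog:
    """Minimal WAL sketch on an NVM-like medium. `pending` models log
    records still sitting in the volatile processor cache; `persist`
    models the flush + barrier that makes them durable in order.
    Everything here is illustrative."""

    def __init__(self):
        self.durable = []    # records that have reached persistent NVM
        self.pending = []    # records still in the volatile cache

    def append(self, record):
        # Appending alone does NOT make a record durable: a power failure
        # now would lose everything in `pending`.
        self.pending.append(record)

    def persist(self):
        # The barrier guarantees records reach NVM in append order.
        # This is the step whose cost erodes WAL throughput on NVM.
        self.durable.extend(self.pending)
        self.pending.clear()

    def commit(self, txn_records):
        for r in txn_records:
            self.append(r)
        self.persist()       # WAL rule: log is durable before commit returns
```

A pure-NVM logging design would try to shrink or eliminate the `persist` step on the commit path, for example by making individual log writes durable as they happen rather than flushing in batches.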
Concurrency control is likewise closely tied to the underlying storage environment. The overhead of the lock manager becomes a bottleneck in NVM storage environments. Approaches such as latch-free data structures, lightweight concurrency primitives, and distributed lock managers have been proposed to reduce locking. In a hybrid NVM environment, an intermediate layer built from low-latency NVM can decouple the physical representation of the multi-version log from its logical representation.
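One lightweight primitive of the kind mentioned above is a seqlock-style versioned cell: writers bump a version counter around each update, and readers retry when they observe a torn read instead of taking a lock. The sketch below runs single-threaded for clarity; a real implementation relies on hardware atomics and memory fences.

```python
class VersionedCell:
    """Seqlock-style cell: a lightweight alternative to a lock manager.
    Odd version = write in progress; even version = stable. Single-threaded
    illustration only; real systems need atomic operations here."""

    def __init__(self, value=0):
        self.version = 0
        self.value = value

    def write(self, value):
        self.version += 1      # mark write in progress (odd)
        self.value = value
        self.version += 1      # mark stable again (even)

    def read(self):
        # Optimistic read: sample the version, read the value, and
        # retry if a writer interleaved (version changed or is odd).
        while True:
            v1 = self.version
            value = self.value
            v2 = self.version
            if v1 == v2 and v1 % 2 == 0:
                return value
```

The attraction in an NVM setting is that readers never write shared state, so they generate no lock-manager traffic and no extra NVM writes.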
The scalability of distributed transactions is central to building distributed transactional systems. Most existing techniques are not transparent to application developers and require preprocessing and control at the application layer. With RDMA-enabled high-performance networks, hardware limitations and software costs are expected to be largely mitigated, addressing the two main difficulties of traditional distributed transaction scaling: limited bandwidth and high CPU overhead.
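As a baseline for the coordination cost that RDMA-enabled networks aim to reduce, classic two-phase commit can be sketched in a few lines. Participants are modeled as vote callables, which is a deliberate simplification of the real message protocol; each phase below corresponds to a network round trip whose CPU and latency cost RDMA designs try to cut.

```python
def two_phase_commit(participants):
    """Minimal two-phase commit sketch. Each participant is a callable
    returning True (vote commit) or False (vote abort). Illustrative:
    real 2PC also logs decisions durably and handles failures."""
    # Phase 1 (prepare): the coordinator collects votes from everyone.
    votes = [vote() for vote in participants]
    # Phase 2 (decide): commit only if every participant voted yes.
    return "commit" if all(votes) else "abort"
```

Each phase is one round of messages per participant; with kernel-bypassing RDMA reads and writes, much of that per-message CPU work on both coordinator and participants can be avoided.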
Research Challenges and Directions for Future Study
We now explore the challenges and future research directions for data management and analytics systems. First, new hardware and environments introduce new performance bottlenecks, which must be identified and analyzed in a system-wide context. Second, the design philosophy behind algorithms and data structures must change to exploit the characteristics of new hardware. Third, new hardware and environments have a deep, crosscutting impact on data management systems.
New hardware environments such as heterogeneous computing architectures, hybrid storage environments, and high-performance networks will create significant opportunities for further studies. These studies will hinge on four aspects:
- Developing loosely coupled system architectures and collaborative design schemes that integrate new hardware into the data management stack
- Exploring storage and index management in mixed, heterogeneous hardware environments
- Designing hardware-aware query processing and performance optimization
- Creating new hardware-enabled transaction processing technologies for concurrency control and recovery that ensure transaction isolation and persistence
New hardware and its environment change computing system architecture and the assumptions software is built on. To obtain the better performance new hardware offers, data management and analysis software must adapt to its features. The complex trade-offs involved require breaking with traditional data management designs and exploring new processing modes and architectures from the bottom up.