In recent years, people have obtained a wide range of information from digital images and videos in fields such as entertainment, education, medicine, and traffic monitoring. In the medical field, for example, high patient-to-staff ratios pose a persistent challenge, and overcrowding in emergency departments is a serious global healthcare issue [
1]. It has always been difficult for hospitals to obtain real-time information on the critical condition of their patients. Emerging Internet of Things (IoT) frameworks enable the creation of tiny devices capable of processing, sensing, and communicating [
2,
3]. Signal processing and signal collection frequently use image sensors. Image sensor arrays, including CMOS image sensors for visible light and the thermal imaging sensor, have been used more and more in a variety of applications due to their constant performance improvement and cost reduction [
4]. A growing number of buildings and huge surroundings are being monitored for pollution using image sensors, such as crop growth records, surveillance of road traffic, border monitoring, and monitoring of forest fires. These applications require real-time and high-quality performance to process the captured data with high resolution. However, the increasingly large amount of data has caused a heavy burden on transmission. To achieve transmission and maintain image quality, it is imperative to combine compression techniques. Because of this, this study proposes an image compression hardware circuit architecture with a high compression ratio, high efficiency, and low complexity. It aims to address the challenging issue of IoT huge data transmission and lower the expense of sampling redundant data.
Image compression coding algorithms can be divided into lossless and lossy methods. Lossless image compression preserves image quality after decompression better than lossy compression. For example, Chen et al. [
5,
6] designed a VLSI architecture for wireless body sensor networks (WBSNs). This design includes a simplified data encoder that reduces the data volume through lossless compression. Video sensor networks (VSNs) are used to transmit high-quality video carrying huge amounts of data, for example with high-efficiency video coding (HEVC) [
7]. However, lossless compression techniques remain limited in their ability to reduce such huge data volumes. Lossy image compression algorithms, by contrast, achieve higher compression rates. In the prior art, common image compression algorithms include JPEG [
8,
9,
10,
11,
12] and block truncation coding (BTC) [
13,
14,
15,
16,
17]. Regarding the development of JPEG technology, JPEG-CHE [
10] recovers precise data via compression history estimation (CHE), exploiting information that is usually discarded after decompression. Ramesh et al. [
11] proposed a state-of-the-art method based on JPEG XS [
12] to directly compress Bayer color filter array (CFA) data. Nevertheless, the complexity and compression ratio of JPEG technology still leave room for improvement. Delp et al. [
13] developed block truncation coding (BTC), a straightforward method with excellent compression rates, to reduce algorithmic complexity; BTC is well suited to hardware implementation. With hardware design in mind, Bo et al. [
14] proposed an efficient image compression algorithm named Microshift. Its hardware-friendly design allows it to be implemented on an FPGA while maintaining high image quality. It down-samples an image into nine sub-images that are more amenable to compression: the first sub-image is compressed losslessly, and the other eight sub-images are predicted from it. Microshift delivers good quality with a high PSNR; however, its compression rate is not higher. Adaptive sampling block compressed sensing (ABCS) is an adaptive sampling method used to process smooth, texture, and edge regions [
15]. Adaptive sampling can improve the quality of different texture details. Li et al. [
16] adopted ABCS for the Green Internet of Things (GIoT) with low power consumption. Sovannarith et al. [
17] proposed fuzzy adaptive sampling block compressed sensing (FABCS), which combines ABCS with a fuzzy logic system (FLS). The algorithm can be applied in wireless multimedia sensor network (WMSN) architectures: it detects features to sample the base and feature layers, then performs the compressed sensing measurements and transmits them over the WMSN, while the FLS adaptively adjusts the sampling rate so the image can be reconstructed. The BTC algorithm replaces the DCT and wavelet transforms with two reconstruction values and splits the whole picture into non-overlapping blocks for computation; by avoiding complex calculations, it is well suited to hardware implementation. The literature review suggests that lossy image compression techniques are preferred for balancing picture accuracy against data reduction, and that a typical pipeline can be divided into four stages: conversion, prediction, quantization, and encoding. The BTC formulas are shown in Equations (1) and (2):
a = μ − σ·√(q/(m − q)),  (1)
b = μ + σ·√((m − q)/q),  (2)
where the low reconstruction value is represented by a, the high reconstruction value is represented by b, and the number of pixels above the average is represented by q; m is the total number of pixels in the block (m = 16 for a 4 × 4 block). The average value and standard deviation of each 4 × 4 block are represented by μ and σ, respectively.
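As a concrete illustration, the BTC quantization of Equations (1) and (2) can be sketched as follows (a minimal Python sketch; the function and variable names are illustrative and not taken from the original hardware design):

```python
import numpy as np

def btc_encode(block):
    """Quantize a 4x4 pixel block into a 1-bit bitmap and two
    reconstruction values, following Equations (1) and (2)."""
    mean = block.mean()
    std = block.std()                    # population std of the block
    bitmap = block > mean                # pixels above the average
    q = int(bitmap.sum())                # number of pixels above average
    m = block.size                       # m = 16 for a 4x4 block
    if 0 < q < m:
        a = mean - std * np.sqrt(q / (m - q))   # low reconstruction value
        b = mean + std * np.sqrt((m - q) / q)   # high reconstruction value
    else:                                # flat block: a single level suffices
        a = b = mean
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    """Rebuild the block: high value where the bitmap is set, low elsewhere."""
    return np.where(bitmap, b, a)
```

By construction, this quantizer preserves the first two moments of each block: the decoded block has the same mean and standard deviation as the original, which is the defining property of BTC.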
In variable-length coding, the code length is determined by the frequency of occurrence of the data: data with a high probability of occurrence are encoded with shorter codes, while data with a low probability of occurrence are encoded with longer ones. On average, variable-length codes are shorter than fixed-length codes, which improves the compression rate. One of the most famous is Huffman coding [
18], which is widely used in image compression technology. Huffman coding derives minimum-length prefix codes from a binary tree. The Golomb code [
19] was invented by Solomon W. Golomb in 1960. It is another variable-length code that uses an adjustable parameter M to split each input value into a quotient and a remainder. Although Huffman coding achieves a higher compression rate, it is time-consuming because the symbol probabilities of the entire image must be counted before encoding. In addition, Huffman coding must store the code information for lookup during encoding and decoding; as a result, the hardware design requires additional memory for a code comparison table, which increases area and cost. Chen et al. [
20] proposed a chip design for lossless image compression in wireless capsule endoscopy, selecting Golomb–Rice coding to reduce area and enable real-time processing in hardware. The present study ultimately aims to improve image compression technology and contribute to IoT devices, allowing on-chip integration with image sensors to fulfill the requirements of high-speed applications; WSNs, for example, tend to transmit data in real time [21].
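To make the quotient/remainder scheme concrete, here is a minimal Golomb–Rice codec sketch (Rice coding restricts the Golomb parameter to a power of two, M = 2^k; the function names are illustrative):

```python
def golomb_rice_encode(value, k):
    """Encode a non-negative integer with Rice parameter k (M = 2**k):
    quotient in unary (ones terminated by a zero), remainder in k bits."""
    q, r = divmod(value, 1 << k)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def golomb_rice_decode(bits, k):
    """Invert the encoding: count leading ones, then read k remainder bits."""
    q = 0
    while bits[q] == "1":
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) + r
```

Small values map to short codewords, so Golomb–Rice coding works best when the data are concentrated near zero (e.g., prediction residuals), with k tuned to the expected magnitude; unlike Huffman coding, no code table needs to be stored, which is why it suits low-cost hardware.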
The layout of this study is as follows: the materials and methods of the compression technique are presented in Section 2. The assessment techniques and the experimental findings on compression performance and hardware usage are described and examined in Section 3. The results are discussed in Section 4, and conclusions and outlooks are presented in Section 5. This proposal aims to address the issue of large-scale data transmission from image sensors in the IoT by using BTC and Golomb–Rice coding to perform image compression with a high compression rate and low complexity, and to achieve high performance through a pipelined circuit design.
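Under the assumptions above, the proposed flow (BTC quantization followed by Golomb–Rice entropy coding) might be sketched per block as follows; the bitstream layout shown (a 16-bit bitmap plus two Rice-coded reconstruction values) is purely illustrative and is not the paper's actual format:

```python
import numpy as np

def rice_encode(value, k):
    # Golomb-Rice: unary quotient, k-bit binary remainder
    q, r = divmod(int(value), 1 << k)
    return "1" * q + "0" + format(r, "b").zfill(k)

def compress_block(block, k=4):
    """BTC stage: reduce a 4x4 block to a 16-bit bitmap and two levels;
    entropy stage: Rice-code the (rounded) reconstruction values."""
    mean, std = block.mean(), block.std()
    bitmap = (block > mean).astype(np.uint8)
    q = int(bitmap.sum())
    m = block.size
    if 0 < q < m:
        a = mean - std * np.sqrt(q / (m - q))
        b = mean + std * np.sqrt((m - q) / q)
    else:
        a = b = mean
    bits = "".join(map(str, bitmap.flatten()))       # 16-bit bitmap
    bits += rice_encode(round(a), k) + rice_encode(round(b), k)
    return bits
```

In a pipelined circuit, the quantization and entropy-coding stages can operate on consecutive blocks concurrently, which is the kind of throughput gain the proposed hardware design targets.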