Learning Sparse Features for Self-Supervised Learning with Contrastive Dual Gating
The success of conventional supervised learning relies on large-scale labeled datasets to achieve high accuracy. However, annotating millions of data samples is labor-intensive and time-consuming. This has made self-supervised learning an attractive alternative, in which artificially generated labels are used for training in place of human-annotated ones. Contrastive...
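As background for the contrastive setting this listing builds on, the standard NT-Xent (normalized-temperature cross-entropy) objective can be sketched in NumPy. This is a generic illustration of contrastive learning, not the dual-gating method disclosed here; all names and the temperature value are illustrative.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss: the two augmented views of each sample
    form a positive pair; all other samples in the batch act as negatives."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2n, d) embeddings
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    # view i (rows 0..n-1) pairs with view i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Minimizing this pulls the two views of each sample together while pushing apart all other pairs in the batch.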
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Jian Meng, Li Yang, Deliang Fan
Keyword(s): Algorithm Development, Machine Learning, Performance Optimization, PS-Computing and Information Technology
Category(s): Computing & Information Technology, Physical Science
Temperature-Resilient RRAM-Based In-Memory Computing for DNN Inference
Deep neural networks (DNNs) have shown extraordinary performance in recent years for various applications, including image classification, object detection, and speech recognition. Accuracy-driven DNN architectures tend to increase model sizes and computations at a very fast pace, demanding a massive amount of hardware resources. Frequent communication...
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Jian Meng, Li Yang, Deliang Fan
Keyword(s): Algorithm Development, Machine Learning, Memory, Neural Computing, PS-Computing and Information Technology
Category(s): Computing & Information Technology, Physical Science
Hierarchical Coarse-Grain Sparsity for Deep Neural Networks
Background
Recurrent neural networks (RNNs) with long short-term memory (LSTM) units enable accurate automatic speech recognition (ASR) but are large in size. Because of this, most speech recognition tasks are performed on cloud servers, which requires a constant internet connection, introduces privacy concerns,...
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Deepak Kadetotad, Chaitali Chakrabarti, Visar Berisha
Keyword(s):
Category(s): Physical Science, Computing & Information Technology, Wireless & Networking
Binary Neural Network for Improved Accuracy and Defense Against Bit-Flip Attacks
Recently, deep neural networks (DNNs) have been deployed in many safety-critical applications. The security of a DNN model can be compromised by adversarial input examples, where the adversary maliciously crafts and adds input noise to fool the model. The perturbation of model parameters (e.g., weights) is another security concern, one that relates...
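The asymmetry that makes binarization attractive as a defense can be seen with a toy comparison (illustrative Python, not the disclosed defense itself): flipping one stored bit of an 8-bit weight can shift its value by up to 128 quantization levels, while a binarized weight occupies a single bit, so one fault can only swap its sign.

```python
def flip_msb_int8(w):
    """Flip the sign bit of an 8-bit two's-complement weight."""
    u = (w & 0xFF) ^ 0x80              # reinterpret as a byte, flip bit 7
    return u - 256 if u >= 128 else u  # back to signed

def flip_binary(w):
    """A binarized weight in {-1, +1} is stored as one bit, so a
    single fault can only swap its sign -- the damage is bounded."""
    return -w

assert flip_msb_int8(3) == -125   # one fault: shift of 128 levels
assert flip_binary(1) == -1       # one fault: bounded change of 2
```

This bounded per-flip damage is why bit-flip attacks that cripple full-precision models degrade binary models far more gracefully.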
Published: 2/13/2025
Inventor(s): Deliang Fan, Adnan Siraj Rakin, Li Yang, Chaitali Chakrabarti, Yu Cao, Jae-Sun Seo, Jingtao Li
Keyword(s): Algorithm Development, Artificial Intelligence, Cyber Security, Defense Applications, Machine Learning, Neural Computing
Category(s): Physical Science, Intelligence & Security, Wireless & Networking, Computing & Information Technology
SRAM Design with Embedded XNOR Functionality for Binary and Ternary Neural Networks
Background
Deep neural networks, and in particular convolutional neural networks, are being used with increasing frequency for a number of tasks such as image classification, image clustering, and object recognition. In a forward propagation of a conventional convolutional neural network, a kernel is passed over one or more tensors to produce one or...
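As an illustration of the arithmetic an embedded-XNOR SRAM accelerates (a sketch of the standard binary-network identity, not the circuit itself): with +1 encoded as bit 1 and -1 as bit 0, a dot product reduces to an XNOR followed by a population count.

```python
def xnor_popcount_dot(a_bits, b_bits, n):
    """Binary dot product over n lanes: XNOR marks positions where the
    two operands agree, so with matches = popcount(XNOR(a, b)), the
    +/-1 dot product equals 2*matches - n."""
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

# 4-element example (LSB first): (+1,+1,-1,+1) . (+1,-1,-1,+1) = 2
assert xnor_popcount_dot(0b1011, 0b1001, 4) == 2
```

Replacing multiply-accumulate with XNOR-popcount is what lets the inner product be computed in the memory array itself.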
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Shihui Yin, Mingoo Seok, Zhewei Jiang
Keyword(s):
Category(s): Physical Science, Computing & Information Technology
Hardware-Noise-Aware Training for Improved Accuracy of In-Memory-Computing-Based Deep Neural Networks
Background
Deep neural networks (DNNs) have been very successful in large-scale recognition tasks, but they exhibit large computation and memory requirements. To address the memory bottleneck of digital DNN hardware accelerators, in-memory computing (IMC) designs have been presented to perform analog DNN computations inside the memory. Recent IMC...
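The core idea of training with the hardware's noise in the loop can be sketched as follows (illustrative NumPy; `sigma` stands in for a measured IMC variation parameter and is not a value from this work):

```python
import numpy as np

def noisy_linear(x, w, sigma, rng):
    """Forward pass of a linear layer with multiplicative Gaussian
    weight noise, mimicking analog conductance variation in an IMC
    array; drawing fresh noise each pass lets training adapt to the
    distortion that inference hardware will actually produce."""
    w_noisy = w * (1.0 + rng.normal(0.0, sigma, size=w.shape))
    return x @ w_noisy
```

Using such a noisy forward pass during training (while keeping the clean weights for the update) is the generic recipe behind hardware-noise-aware training.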
Published: 2/13/2025
Inventor(s): Sai Kiran Cherupally, Jae-Sun Seo, Deliang Fan, Shihui Yin, Jian Meng
Keywords(s): Algorithm Development, Artificial Intelligence, Electronics, Neural Computing
Category(s): Physical Science, Computing & Information Technology, Intelligence & Security
Programmable In-Memory Computing Accelerator for Low-Precision Deep Neural Network Inference
Background
In the era of artificial intelligence, various deep neural networks (DNNs), such as multi-layer perceptron, convolutional neural networks, and recurrent neural networks, have emerged and achieved human-level performance in many recognition tasks. These DNNs usually require billions of multiply-and-accumulate (MAC) operations, soliciting...
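To make the scale of those billions of MACs concrete, a back-of-the-envelope helper (the layer dimensions below are generic examples, not taken from this work): a convolutional layer's MAC count is its output volume times its receptive-field size.

```python
def conv_macs(h_out, w_out, c_out, c_in, k):
    """MACs for one conv layer: every output element is an inner
    product over a k x k x c_in receptive field."""
    return h_out * w_out * c_out * (k * k * c_in)

# A single mid-network 3x3 layer already costs ~231M MACs:
assert conv_macs(56, 56, 128, 64, 3) == 231_211_008
```

Summing this over the tens of layers in a modern CNN is what pushes totals into the billions and motivates in-memory acceleration.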
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Shihui Yin, Mingoo Seok, Bo Zhang
Keyword(s): Artificial Intelligence, Circuits, Computational Machine, Electronics, Machine Learning
Category(s): Physical Science, Computing & Information Technology, Intelligence & Security
Authentication and Secret Key Generation Using Electrocardiogram, Heart Rate Variability, and SRAM-Based Physical Unclonable Functions
Background
Traditional hardware designs for device authentication and secret key generation typically employ physical unclonable functions (PUFs) which generate unique random numbers based on static random-access memory (SRAM), delay, or analog circuit elements. Although silicon PUFs can be highly stable and unique, they do not represent liveliness....
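One way to picture the combination (a hypothetical sketch, not the disclosed construction; `derive_key` and both inputs are illustrative names): fuse the device-bound PUF response with a user-bound biometric feature before key derivation, so the resulting key attests to both the silicon and a live wearer.

```python
import hashlib

def derive_key(sram_puf_response: bytes, ecg_features: bytes) -> bytes:
    """Hash the concatenation of an SRAM PUF power-up response and a
    quantized ECG/HRV feature vector into a 256-bit secret key.
    (A real design would also need error correction / fuzzy
    extraction, since both sources are noisy across readouts.)"""
    return hashlib.sha256(sram_puf_response + ecg_features).digest()
```

Either a different device or a different (or absent) wearer yields a completely different key.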
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Shihui Yin, Sai Kiran Cherupally
Keyword(s):
Category(s): Physical Science, Computing & Information Technology, Medical Diagnostics/Sensors, Wireless & Networking
Resistive RAM Design with Embedded XNOR Computing
Background
Neuromorphic computing, or the concept of designing systems that mimic the adaptability and learning exhibited by biological neural processes, is the future of computing design. This type of computing is ideal for use in the Internet of Things (IoT) movement, which refers to the embedding of internet computing devices into everyday objects....
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Shimeng Yu
Keyword(s):
Category(s): Computing & Information Technology, Physical Science
Coarse-Grain Memory Sparsification for Small-Footprint Deep Neural Networks
Recent breakthroughs in deep neural networks (DNNs) have led to improvements in state-of-the-art speech applications. Conventional DNNs have hundreds or thousands of neurons in each layer, which require a large amount of memory to store the connections between neurons. Implementing these networks in hardware requires a large memory and high computation...
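The flavor of coarse-grain sparsification can be sketched as block-wise magnitude pruning (illustrative NumPy; the block size and keep ratio are arbitrary here, not the disclosed scheme): zeroing whole tiles means the hardware stores one index per block instead of one per weight.

```python
import numpy as np

def block_prune(w, block, keep_ratio):
    """Coarse-grain sparsification: rank block x block tiles of the
    weight matrix by L2 norm and zero the weakest ones, so sparsity
    is indexed per tile rather than per element."""
    rows, cols = w.shape
    tiles = w.reshape(rows // block, block, cols // block, block)
    norms = np.sqrt((tiles ** 2).sum(axis=(1, 3)))    # per-tile L2 norm
    k = max(1, int(round(norms.size * keep_ratio)))   # tiles to keep
    thresh = np.sort(norms, axis=None)[-k]            # top-k cutoff
    mask = (norms >= thresh)[:, None, :, None]        # broadcast over tiles
    return (tiles * mask).reshape(rows, cols)
```

Element-wise pruning would need an index for every surviving weight; pruning at tile granularity shrinks that bookkeeping by a factor of block², which is what makes the memory footprint small.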
Published: 2/13/2025
Inventor(s): Jae-Sun Seo, Chaitali Chakrabarti, Sairam Arunachalam, Deepak Kadetotad
Keyword(s):
Category(s): Computing & Information Technology, Wireless & Networking