News
- Our paper titled “NeuroSpector: Systematic Optimization of Dataflow Scheduling in DNN Accelerators” is accepted to IEEE Transactions on Parallel and Distributed Systems. Well done, Chanho and Bogil!
- Our paper titled “LAS: Locality-Aware Scheduling for GEMM-Accelerated Convolutions in GPUs” is accepted to IEEE Transactions on Parallel and Distributed Systems. Congrats, Hyeonjin!
- Jeongmin, Semin, and Suan have successfully defended their MS theses. Congratulations, and we wish you the best in your future career paths.
- Our paper titled “NOMAD: Enabling Non-blocking OS-managed DRAM Cache via Tag-Data Decoupling” is accepted to HPCA 2023. Congrats, Youngin and Hyeonjin!
- A paper titled “SnakeByte: A TLB Design with Adaptive and Recursive Page Merging in GPUs” is accepted to HPCA 2023. Kudos to all authors!
- William receives a Teaching Excellence Award from the College of Engineering, Yonsei University, in Apr. 2022.
- Sungmin has successfully defended his MS thesis titled “Optimization of Reconfigurable Deep Neural Network Accelerators Using Bottom-Up Mapping and Energy Prediction.” Congrats!
- Our white paper titled “NPUsim: Full-System, Cycle-Accurate, Functional Simulations of Deep Neural Network Accelerators” is accepted to the US DOE Workshop on Modeling and Simulation of Systems and Applications (ModSim) 2021. Congrats, Bogil!
- Sungjae has successfully defended his MS thesis titled “Exploiting Large and Small Page Sizes in Two-Tiered Memory System.” He joined NAVER for the first step of his career. Congrats!
- Our paper titled “Energy-Efficient Acceleration of Deep Neural Networks on Realtime-Constrained Embedded Edge Devices” is accepted for publication at IEEE Access. Congrats, Bogil and Sungjae!
- A paper titled “Thread-Aware Area-Efficient High-Level Synthesis Compiler for Embedded Devices” is accepted to CGO 2021.
- A paper titled “The Nebula Benchmark Suite: Implications of Lightweight Neural Networks” is accepted to IEEE Transactions on Computers. Kudos to Bogil and co-authors!
- Our paper titled “Duplo: Lifting Redundant Memory Accesses of Deep Neural Networks for GPU Tensor Cores” is accepted to MICRO 2020. Congrats, Hyeonjin!
- Our submission to the US DOE Workshop on Modeling and Simulation of Systems and Applications (ModSim) 2020 is accepted. Bogil gives an invited talk titled “Nebula: Lightweight Neural Network Benchmarks” in Aug. 2020.
- William receives a Teaching Excellence Award from the College of Engineering, Yonsei University, in Feb. 2020.
Openings
Our lab is looking for highly self-motivated and brilliant students broadly interested in computer systems and architecture, including but not limited to:
- Neural accelerators
- Memory systems, processing in/near memory
- GPU microarchitecture for machine learning
- Quantum computing and circuit simulations
- Power, thermal, and reliability management
The following courses are relevant to our research interests. Students looking for lab opportunities are encouraged to take these courses, though not all of them are required. Strong programming skills (e.g., C++, Python, Perl, Verilog) are mandatory. A good candidate should have a minimum GPA of 3.5/4.3; otherwise, the applicant must demonstrate competence by other means.
- EEE3530 or CSI3102 Computer Architecture
- EEE3535 or CSI3101 Operating Systems
- EEE3540 Microprocessors
- EEE3544 System IC Design
- EEE3314 or CSI4108 Artificial Intelligence
- EEE5501 Advanced Programming
- EEE6504 or CSI4104 Compilers
- EEE6510 or CSI6532 Advanced Computer Architecture
Eligibility: Applicants interested in joining the lab must have legal residence in South Korea prior to contacting the lab. Please do not send us emails unless you are legally present in South Korea.
Contact: William J. Song (Office: Eng-C410, Email: wjhsong {\at} yonsei {\dot} ac {\dot} kr, Phone: 2123-2864)