
Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models

Author

Listed:
  • Jaewoo Yang

    (Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea)

  • Hayun Kim

    (Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea)

  • Junyung Ji

    (Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea)

  • Younghoon Kim

    (Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea)

Abstract

Modern large language models (LLMs) achieve state-of-the-art performance through architectural advancements but require high computational costs for inference. Post-training quantization is a widely adopted approach to reduce these costs by quantizing weights and activations to lower precision, such as INT8. However, we identify a critical challenge in activation quantization for GLU (Gated Linear Unit) variants, which are commonly used in the feed-forward networks of modern LLMs like the LLaMA family. Specifically, severe local quantization errors arise due to excessively large activation magnitudes, which we refer to as activation spikes, leading to significant degradation in model performance. Our analysis reveals a systematic pattern of these spikes: they predominantly occur in the FFN (feed-forward network) layers at the early and late layers of the model and are concentrated on a small subset of tokens rather than being uniformly distributed across a token sequence. To mitigate this issue, we propose two empirical methods: Quantization-free Module (QFeM) and Quantization-free Prefix (QFeP), which isolate activation spikes during quantization. Extensive experiments demonstrate that our methods effectively improve activation quantization, particularly in coarse-grained quantization schemes, enhancing the performance of LLMs with GLU variants and addressing the limitations of existing quantization techniques. The code for implementing our methods and reproducing the experiments is publicly available in our GitHub repository.
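The following is a minimal illustrative sketch (not the authors' released implementation) of the failure mode the abstract describes: in per-tensor INT8 quantization, a single spiked token in a GLU-style FFN activation stretches the quantization scale and degrades accuracy for all other tokens, while keeping that token in full precision, in the spirit of the proposed quantization-free handling, largely recovers it. All tensor shapes and the spike injection are assumptions for demonstration only.

```python
# Illustrative sketch only: how an activation spike inflates per-tensor INT8
# quantization error in a GLU-style FFN, and how isolating the spiked token
# (keeping it unquantized) reduces the error. Not the paper's QFeM/QFeP code.
import torch

torch.manual_seed(0)


def int8_quant_dequant(x: torch.Tensor) -> torch.Tensor:
    """Symmetric per-tensor INT8 quantize-dequantize."""
    scale = x.abs().max() / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127)
    return q * scale


# Toy GLU (SwiGLU-style) FFN input: 16 tokens, hidden size 64 (assumed sizes).
hidden = torch.randn(16, 64)
w_gate, w_up = torch.randn(64, 256), torch.randn(64, 256)

# GLU intermediate activation: silu(x @ W_gate) * (x @ W_up)
act = torch.nn.functional.silu(hidden @ w_gate) * (hidden @ w_up)

# Inject an activation spike on one token, mimicking the observed pattern of
# spikes concentrated on a small subset of tokens.
act[3] *= 50.0

# Per-tensor quantization: the spike dominates the scale, so other tokens lose precision.
err_all = (int8_quant_dequant(act) - act).pow(2).mean()

# Isolate the spiked token (leave it in full precision) and quantize the rest.
mask = torch.ones(act.shape[0], dtype=torch.bool)
mask[3] = False
deq = act.clone()
deq[mask] = int8_quant_dequant(act[mask])
err_isolated = (deq - act).pow(2).mean()

print(f"MSE, spike included in quantization: {err_all:.4f}")
print(f"MSE, spike token kept in full precision: {err_isolated:.4f}")
```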

Suggested Citation

  • Jaewoo Yang & Hayun Kim & Junyung Ji & Younghoon Kim, 2025. "Mitigating Quantization Errors Due to Activation Spikes in Gated Linear Unit-Based Large Language Models," Future Internet, MDPI, vol. 17(4), pages 1-21, April.
  • Handle: RePEc:gam:jftint:v:17:y:2025:i:4:p:185-:d:1639317

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/17/4/185/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/17/4/185/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:17:y:2025:i:4:p:185-:d:1639317. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.