
Domain-Specific Large Language Model for Renewable Energy and Hydrogen Deployment Strategies

Author

Listed:
  • Hossam A. Gabber

    (Faculty of Engineering and Applied Science, Ontario Tech University (UOIT), Oshawa, ON L1G 0C5, Canada)

  • Omar S. Hemied

    (Faculty of Engineering and Applied Science, Ontario Tech University (UOIT), Oshawa, ON L1G 0C5, Canada)

Abstract

Recent advances in large language models (LLMs) have shown promise in specialized fields, yet their effectiveness is often constrained by limited domain expertise. We present RE-LLaMA, a renewable and hydrogen energy-focused LLM developed by fine-tuning LLaMA 3.1 8B on a curated renewable energy corpus. Through continued pretraining on domain-specific data, we enhanced the model’s capabilities in renewable energy contexts. Extensive evaluation using zero-shot and few-shot prompting showed that the fine-tuned model significantly outperformed the base model across renewable and hydrogen energy tasks. This work establishes the viability of specialized, smaller-scale LLMs and provides a framework for developing domain-specific models that can support advanced research and decision-making in the renewable and hydrogen energy sector.
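The paper's code is not reproduced on this index page; the following is a minimal sketch of the two steps the abstract describes — continued pretraining of LLaMA 3.1 8B on a domain corpus, then a zero- vs. few-shot comparison — assuming the Hugging Face transformers and datasets stack. The model ID, corpus file name, prompts, and hyperparameters are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch; the paper does not publish its training code.
    # Model ID, corpus file, and hyperparameters below are assumptions.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    BASE_MODEL = "meta-llama/Llama-3.1-8B"    # base checkpoint named in the abstract
    CORPUS = "renewable_energy_corpus.txt"    # hypothetical curated domain corpus

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

    # Continued pretraining: a plain causal-LM objective on raw domain text,
    # as opposed to instruction tuning on prompt/response pairs.
    raw = load_dataset("text", data_files={"train": CORPUS})
    train = raw["train"].map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
        batched=True,
        remove_columns=["text"],
    )

    Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="re-llama",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=16,
            learning_rate=2e-5,
            num_train_epochs=1,
            bf16=True,
            logging_steps=50,
        ),
        train_dataset=train,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

The abstract's evaluation contrasts zero-shot and few-shot prompting; a toy comparison of the two regimes might look like the following, with prompts invented for illustration:

    from transformers import pipeline

    generate = pipeline("text-generation", model="re-llama")  # path from the sketch above

    zero_shot = "Q: Which electrolyzer type best tolerates variable wind power? A:"
    few_shot = (
        "Q: What by-product does steam methane reforming emit? A: CO2.\n"
        + zero_shot  # same question, now preceded by a worked example
    )
    for prompt in (zero_shot, few_shot):
        print(generate(prompt, max_new_tokens=64)[0]["generated_text"])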

Suggested Citation

  • Hossam A. Gabber & Omar S. Hemied, 2024. "Domain-Specific Large Language Model for Renewable Energy and Hydrogen Deployment Strategies," Energies, MDPI, vol. 17(23), pages 1-25, December.
  • Handle: RePEc:gam:jeners:v:17:y:2024:i:23:p:6063-:d:1535038

    Download full text from publisher

    File URL: https://www.mdpi.com/1996-1073/17/23/6063/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1996-1073/17/23/6063/
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ching-Nam Hang & Pei-Duo Yu & Roberto Morabito & Chee-Wei Tan, 2024. "Large Language Models Meet Next-Generation Networking Technologies: A Review," Future Internet, MDPI, vol. 16(10), pages 1-29, October.
    2. Soroosh Tayebi Arasteh & Tianyu Han & Mahshad Lotfinia & Christiane Kuhl & Jakob Nikolas Kather & Daniel Truhn & Sven Nebelung, 2024. "Large language models streamline automated machine learning for clinical studies," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    3. Zhenjia Chen & Zhenyuan Lin & Ji Yang & Cong Chen & Di Liu & Liuting Shan & Yuanyuan Hu & Tailiang Guo & Huipeng Chen, 2024. "Cross-layer transmission realized by light-emitting memristor for constructing ultra-deep neural network with transfer learning ability," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    4. Yujin Oh & Sangjoon Park & Hwa Kyung Byun & Yeona Cho & Ik Jae Lee & Jin Sung Kim & Jong Chul Ye, 2024. "LLM-driven multimodal target volume contouring in radiation oncology," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    5. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Palgrave Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.
    6. Juexiao Zhou & Xiaonan He & Liyuan Sun & Jiannan Xu & Xiuying Chen & Yuetan Chu & Longxi Zhou & Xingyu Liao & Bin Zhang & Shawn Afvari & Xin Gao, 2024. "Pre-trained multimodal large language model enhances dermatological diagnosis using SkinGPT-4," Nature Communications, Nature, vol. 15(1), pages 1-12, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jeners:v:17:y:2024:i:23:p:6063-:d:1535038. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.