Printed from https://ideas.repec.org/a/bjc/journl/v10y2023i11p634-643.html

A Deep Learning Model to Generate Image Captions

Author

Listed:
  • Shashank Parmar

    (Department of Software Engineering, Delhi Technological University, New Delhi, India)

  • Raman Tyagi

    (Department of Software Engineering, Delhi Technological University, New Delhi, India)

  • Prince Kumar Dhankar

    (Department of Software Engineering, Delhi Technological University, New Delhi, India)

Abstract

How computers can automatically describe the content of photographs in natural language is a topic that interests us greatly. To gain a deeper understanding of this computer vision problem, we chose to work with the most advanced image caption generator currently available: "Show, Attend and Tell", a visually attentive neural image caption generator [12]. Our neural-network-based image description generator is implemented in Python using the PyTorch ML framework. In our workflow, we have identified five key elements, which make up the R1-R6 framework: data preparation, a convolutional neural network (CNN) for encoding, a recurrent neural network (RNN) for decoding, a beam search to determine the best description, and sentence generation and assessment. The quality and correctness of the generated captions are evaluated using the BLEU-4 score. Each member of our group contributed equally to moving the project forward, as we distributed the five elements mentioned above equally among ourselves. All five components have been completed successfully, and we can now train our network in a Kaggle Notebook. After the network has been trained and is performing satisfactorily, we visualize the attention mechanism.
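The beam search mentioned in the abstract keeps the top-k highest-scoring partial captions at each decoding step instead of greedily taking the single best word. A minimal, self-contained sketch of this idea is shown below; it is not the authors' code, and the `step_fn` interface and the toy language model are illustrative assumptions standing in for the RNN decoder's per-step word probabilities.

```python
import math

def beam_search(step_fn, start_token, end_token, beam_width=3, max_len=10):
    """Generic beam search: step_fn(prefix) returns a list of
    (token, log_prob) continuations for the given prefix."""
    beams = [([start_token], 0.0)]  # (token sequence, cumulative log-prob)
    complete = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, logp in step_fn(seq):
                candidates.append((seq + [tok], score + logp))
        # keep only the highest-scoring partial captions
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            if seq[-1] == end_token:
                complete.append((seq, score))  # caption finished
            else:
                beams.append((seq, score))
        if not beams:
            break
    complete.extend(beams)
    return max(complete, key=lambda c: c[1])[0]

# Toy "decoder": a fixed next-word distribution keyed on the last word,
# standing in for the RNN's softmax output at each step.
toy_lm = {
    "<s>": [("a", math.log(0.6)), ("the", math.log(0.4))],
    "a":   [("dog", math.log(0.7)), ("cat", math.log(0.3))],
    "the": [("dog", math.log(0.9)), ("cat", math.log(0.1))],
    "dog": [("</s>", math.log(1.0))],
    "cat": [("</s>", math.log(1.0))],
}

caption = beam_search(lambda seq: toy_lm[seq[-1]], "<s>", "</s>", beam_width=2)
# caption == ["<s>", "a", "dog", "</s>"]
```

In a real captioning system, `step_fn` would run one step of the RNN decoder (conditioned on the attention-weighted CNN features) and return the log-probabilities of the top vocabulary words.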

Suggested Citation

  • Shashank Parmar & Raman Tyagi & Prince Kumar Dhankar, 2023. "A Deep Learning Model to Generate Image Captions," International Journal of Research and Scientific Innovation, International Journal of Research and Scientific Innovation (IJRSI), vol. 10(11), pages 634-643, November.
  • Handle: RePEc:bjc:journl:v:10:y:2023:i:11:p:634-643

    Download full text from publisher

    File URL: https://www.rsisinternational.org/journals/ijrsi/digital-library/volume-10-issue-11/634-643.pdf
    Download Restriction: no

    File URL: https://rsisinternational.org/journals/ijrsi/articles/a-deep-learning-model-to-generate-image-captions/
    Download Restriction: no



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.