Printed from https://ideas.repec.org/p/arx/papers/2406.08013.html

Deep reinforcement learning with positional context for intraday trading

Author

Listed:
  • Sven Goluža
  • Tomislav Kovačević
  • Tessa Bauman
  • Zvonko Kostanjčar

Abstract

Deep reinforcement learning (DRL) is a well-suited approach to financial decision-making, where an agent makes decisions based on its trading strategy developed from market observations. Existing DRL intraday trading strategies mainly use price-based features to construct the state space. They neglect the contextual information related to the position of the strategy, which is an important aspect given the sequential nature of intraday trading. In this study, we propose a novel DRL model for intraday trading that introduces positional features encapsulating the contextual information into its sparse state space. The model is evaluated over an extended period of almost a decade and across various assets including commodities and foreign exchange securities, taking transaction costs into account. The results show a notable performance in terms of profitability and risk-adjusted metrics. The feature importance results show that each feature incorporating contextual information contributes to the overall performance of the model. Additionally, through an exploration of the agent's intraday trading activity, we unveil patterns that substantiate the effectiveness of our proposed model.
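The abstract's central idea, augmenting a price-based state with positional context, can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the feature choices (recent log returns, position direction, unrealized return, holding time) and all names are assumptions made for illustration.

```python
import numpy as np

def build_state(prices, position, entry_price, holding_steps, window=8):
    """Concatenate price-based features with positional context features.

    prices        : 1-D array of recent prices
    position      : +1 long, -1 short, 0 flat
    entry_price   : price at which the current position was opened
    holding_steps : number of steps the position has been held
    """
    # Price-based features: the last `window` log returns.
    returns = np.diff(np.log(prices[-(window + 1):]))
    # Positional features: direction, unrealized return, time in position.
    unrealized = 0.0
    if position != 0:
        unrealized = position * (prices[-1] - entry_price) / entry_price
    positional = np.array([position, unrealized, holding_steps], dtype=float)
    return np.concatenate([returns, positional])

prices = np.array([100.0, 100.5, 101.0, 100.8, 101.2,
                   101.5, 101.3, 101.9, 102.0])
state = build_state(prices, position=1, entry_price=100.8, holding_steps=3)
```

In a DRL setup, a vector like `state` would be fed to the agent's policy network at each step, so the agent can condition its actions on its current exposure rather than on prices alone.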

Suggested Citation

  • Sven Goluža & Tomislav Kovačević & Tessa Bauman & Zvonko Kostanjčar, 2024. "Deep reinforcement learning with positional context for intraday trading," Papers 2406.08013, arXiv.org.
  • Handle: RePEc:arx:papers:2406.08013

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2406.08013
    File Function: Latest version
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2406.08013. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows us to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.