Authors:
- Mingwei Wang
- Decui Liang
- Zeshui Xu
Abstract
Group opinion often has an important influence on the development of, and decision-making around, major events. However, two problems arise as opinions evolve: (1) group opinion may diverge sharply, which hinders reaching a final decision opinion; (2) opinion evolution can also introduce serious systemic biases into the group, which can lead to a final decision that is far from the truth. Hence, this paper investigates two important strategies in opinion management: consensus boost and opinion guidance. Meanwhile, given the urgency of some decision-making problems, such as major public crisis events, the opinion management process is also subject to a time constraint. We first formalize minimum-adjustment-cost consensus boost and opinion guidance with a time constraint as a Markov decision process, since the interaction and evolution rules of opinions described by opinion dynamics satisfy the Markov property. In this setting, minimizing the adjustment cost improves the efficiency of opinion management. We further propose a consensus boost algorithm and an opinion guidance algorithm based on deep reinforcement learning, which effectively mirrors human learning by exploring and receiving feedback from opinion dynamics. By combining these algorithms, we design a new opinion management framework with deep reinforcement learning (OMFDRL). Finally, through comparison experiments, we verify the advantages of the proposed OMFDRL, which offers managers more flexible usage conditions.
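The abstract describes opinion evolution governed by opinion dynamics, steered toward consensus under a time constraint. As a minimal sketch only (the paper's actual dynamics model and reward design are not reproduced here), the example below uses a bounded-confidence, Hegselmann–Krause-style update and a simple consensus measure; the model choice, the confidence bound `epsilon`, the 30-round horizon, and the consensus metric are all illustrative assumptions.

```python
import numpy as np

def hk_step(opinions, epsilon=0.2):
    """One synchronous bounded-confidence update: each agent moves to the
    mean opinion of all agents within distance epsilon of its own opinion."""
    n = len(opinions)
    updated = np.empty(n)
    for i in range(n):
        neighbors = np.abs(opinions - opinions[i]) <= epsilon
        updated[i] = opinions[neighbors].mean()
    return updated

def consensus_degree(opinions):
    """Crude consensus measure for opinions in [0, 1]: 1 minus the spread."""
    return 1.0 - (opinions.max() - opinions.min())

# Evolve a group of 20 agents under a time constraint of 30 rounds.
rng = np.random.default_rng(0)
x = rng.random(20)
for _ in range(30):
    x = hk_step(x)
print(consensus_degree(x))
```

In the Markov-decision-process framing of the abstract, the opinion vector would be the state, a manager's (cost-penalized) adjustment to selected opinions would be the action, and a dynamics step like `hk_step` would drive the transition; a deep reinforcement learning agent then learns adjustments that raise `consensus_degree` within the allotted rounds.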
Suggested Citation
Mingwei Wang & Decui Liang & Zeshui Xu, 2022.
"Consensus achievement strategy of opinion dynamics based on deep reinforcement learning with time constraint,"
Journal of the Operational Research Society, Taylor & Francis Journals, vol. 73(12), pages 2741-2755, December.
Handle:
RePEc:taf:tjorxx:v:73:y:2022:i:12:p:2741-2755
DOI: 10.1080/01605682.2021.2015257