Learning Towards Transportation Network Equilibrium


Article title (Persian): یادگیری به سمت تعادل شبکه حمل و نقل: یک مطالعه مقایسه مدل
Article title (English): Learning Towards Transportation Network Equilibrium: A Model Comparison Study
Journal/Conference: IEEE Access
Related fields of study: Management
Related academic specializations: Information Resources Management
Keywords (Persian): بازی انتخاب مسیر، تجربه آزمایشگاهی، یادگیری تقویتی، هماهنگی ضمنی، تعادل نش
Keywords (English): Route choice game, laboratory experiment, reinforcement learning, tacit coordination, Nash equilibrium
Article type: Research Article
Digital Object Identifier (DOI): https://doi.org/10.1109/ACCESS.2019.2949576
Affiliation: School of Information Management, Wuhan University, Wuhan 430072, China
Number of pages (English article): 11
Publisher: IEEE
Publication format: Journal
Indexing: ISI
Publication year: 2019
Impact factor: 4.641 (2018)
H-index: 56 (2019)
SJR: 0.609 (2018)
ISSN: 2169-3536
Quartile: Q2 (2018)
File format (English article): PDF
Translation status: Not translated
Price of the English article: Free
Is this a base (foundation) article: No
Includes a conceptual model: No
Includes a questionnaire: No
Includes variables: No
Product code: E13919
References: in-text citations and a reference list at the end of the article
Table of Contents (English)

Abstract

I. Introduction

II. A Brief Literature Review

III. Data and Models

IV. Model Comparison

V. Results

Authors

Figures

References

Excerpt from the Article (English)

Abstract

As an interdisciplinary topic, human travel-choice behavior has attracted the interest of transportation managers, theoretical computer science researchers, and economists. Recent studies on tacit coordination in iterated route choice games (i.e., a large number of subjects could achieve the transportation network equilibrium in limited rounds) have been driven by two questions. (1) Will learning behavior promote tacit coordination in route choice games? (2) Which learning model can best account for these choices/behaviors? To answer the first question, we choose a set of learning models and conduct extensive simulations to determine their success in accounting for major behavioral patterns. To answer the second question, we compare these models to one another by competitively testing their predictions on four different datasets. Although all the selected models account reasonably well for the slow convergence of the mean route choice to equilibrium, they account only moderately well for the mean frequencies of the round-to-round switches from one route to another and fail to appropriately account for substantial individual differences. The implications of these findings for model construction and testing are briefly discussed.
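The two behavioral measures the abstract names can be computed directly from a subjects-by-rounds choice matrix. The sketch below (plain Python with NumPy, using randomly generated data rather than the paper's datasets; the 20-subject, 60-round, two-route setup is an illustrative assumption) shows one way to obtain the mean route choice per round and the mean frequency of round-to-round switches:

import numpy as np

# Hypothetical data standing in for an experimental dataset:
# a (subjects x rounds) matrix of route indices in {0, 1}.
rng = np.random.default_rng(0)
choices = rng.integers(0, 2, size=(20, 60))

# Mean share of route 1 per round; convergence to equilibrium would
# show this curve settling near the equilibrium split.
mean_choice = choices.mean(axis=0)

# Fraction of subjects switching routes between consecutive rounds.
switch_rate = (choices[:, 1:] != choices[:, :-1]).mean(axis=0)

print("mean route-1 share, last 5 rounds:", mean_choice[-5:])
print("mean switch rate, last 5 rounds:", switch_rate[-5:])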

Introduction

In both transportation and communication networks, where the route choices are decentralized, utility-maximizing players facing strategic uncertainty often strive to avoid congestion [1]. Examples include choosing a restaurant on Saturday evening, selecting a route in a traffic network, and deciding whether to enter a capacitated market. The notion of equilibrium in such scenarios, once they are modeled appropriately as non-cooperative n-person games, leads us naturally to ask how players achieve this "meeting of the minds." The focus of the present paper is on the choice of routes in directed networks. We focus on computer-controlled experimental studies of a class of network games, called iterated route choice games. These games have multiple equilibria that, depending on the architecture of the network and the number of network users, are counted in thousands or occasionally in millions. The study of such games falls at the intersection of behavioral economics, transportation science [2], computer science [3], and operations management [4]. If tacit coordination in large groups is neither reached by communication nor deduced by introspection, then it is achieved by learning "day by day" [5], [6]. Most previous experimental studies of route choice games largely support this assertion [3].
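To make learning "day by day" concrete, the following minimal sketch simulates a symmetric two-route congestion game under a basic Erev-Roth style cumulative reinforcement rule. It is an illustration only, not one of the specific models the paper compares; the payoff constant 20, the group size of 18 players, and the uniform initial propensities are hypothetical choices:

import numpy as np

rng = np.random.default_rng(1)
n_players, n_rounds = 18, 100
propensities = np.ones((n_players, 2))  # initial attraction of routes 0 and 1

for t in range(n_rounds):
    # Choice probabilities proportional to accumulated propensities.
    probs = propensities / propensities.sum(axis=1, keepdims=True)
    choices = (rng.random(n_players) < probs[:, 1]).astype(int)

    load = np.bincount(choices, minlength=2)  # players per route
    payoffs = 20 - load[choices]              # congestion: payoff falls with load

    # Reinforce each player's chosen route by its (non-negative) payoff.
    propensities[np.arange(n_players), choices] += np.maximum(payoffs, 0)

# At the pure-strategy equilibria of this symmetric game the load splits 9/9.
print("final split:", np.bincount(choices, minlength=2))

Runs of this kind typically show the loads drifting toward the equal split that characterizes the pure-strategy equilibria of such games, which is the kind of slow convergence the paper's first question is about.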