1–10 of 74 results

Your search:
keywords:("Markov")

    Driving cycle electrification and comparison

    Ye, Yiming / Zhao, Xuan / Zhang, Jiangfeng | Elsevier | 2023
    Keywords: Markov chain

    Experience-based territory planning and driver assignment with predicted demand and driver present condition

    Li, Yifu / Zhou, Chenhao / Yuan, Peixue et al. | Elsevier | 2023
    Keywords: Markov decision process

    Passenger engagement dynamics in ride-hailing services: A heterogeneous hidden Markov approach

    Chen, Xian / Bai, Shuotian / Wei, Yongqin et al. | Elsevier | 2023
    Keywords: Hidden Markov models

    Multi-Platform dynamic game and operation of hybrid Bike-Sharing systems based on reinforcement learning

    Shi, Ziyi / Xu, Meng / Song, Yancun et al. | Elsevier | 2023
    Keywords: Dual-platform Markov decision process

    A policy gradient approach to solving dynamic assignment problem for on-site service delivery

    Yan, Yimo / Deng, Yang / Cui, Songyi et al. | Elsevier | 2023
    Keywords: Semi-Markov decision process

    A cumulative risk and sustainability index for pavements

    Blaauw, Sheldon A. / Maina, James W. / Grobler, Louis J. et al. | Elsevier | 2022
    Keywords: Markov chain Monte Carlo

    Dynamic bicycle relocation problem with broken bicycles

    Cai, Yutong / Ong, Ghim Ping / Meng, Qiang | Elsevier | 2022
    Keywords: Markov decision process

    Reinforcement learning for logistics and supply chain management: Methodologies, state of the art, and future opportunities

    Yan, Yimo / Chow, Andy H.F. / Ho, Chin Pang et al. | Elsevier | 2022
    Keywords: Markov decision process

    Integrated and coordinated relief logistics and road recovery planning problem

    Akbari, Vahid / Sayarshad, Hamid R. | Elsevier | 2022
    Keywords: Markov decision process (MDP)

    The flying sidekick traveling salesman problem with stochastic travel time: A reinforcement learning approach

    Liu, Zeyu / Li, Xueping / Khojandi, Anahita | Elsevier | 2022
    Keywords: Markov decision process