
Reinforcement Learning for Dynamic Microfluidic Control

Title
Reinforcement Learning for Dynamic Microfluidic Control
Author
주재범
Keywords
FLOW; PLATFORM
Issue Date
2018-08
Publisher
AMER CHEMICAL SOC
Citation
ACS OMEGA, v. 3, no. 8, pp. 10084-10091
Abstract
Recent years have witnessed an explosion in the application of microfluidic techniques to a wide variety of problems in the chemical and biological sciences. Despite the many considerable advantages that microfluidic systems bring to experimental science, microfluidic platforms often exhibit inconsistent system performance when operated over extended timescales. Such variations in performance arise from a multiplicity of factors, including microchannel fouling, substrate deformation, temperature and pressure fluctuations, and inherent manufacturing irregularities. The introduction and integration of advanced control algorithms in microfluidic platforms can help mitigate such inconsistencies, paving the way for robust and repeatable long-term experiments. Herein, two state-of-the-art reinforcement learning algorithms, based on Deep Q-Networks and model-free episodic controllers, are applied to two experimental "challenges," involving both continuous-flow and segmented-flow microfluidic systems. The algorithms are able to attain superhuman performance in controlling and processing each experiment, highlighting the utility of novel control algorithms for automated high-throughput microfluidic experimentation.
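
The abstract names Deep Q-Networks as one of the two control algorithms. The sketch below illustrates, under stated assumptions, how a DQN agent might be wrapped around a flow set-point task: the toy environment (ToyFlowEnv), the three pump-rate actions, and the reward based on set-point error are all illustrative stand-ins and are not taken from the paper's experimental setup.

```python
# Minimal DQN sketch for a hypothetical flow-rate set-point task (PyTorch).
# state = (measured flow, target flow); actions = {lower, hold, raise} pump rate.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class ToyFlowEnv:
    """Hypothetical continuous-flow set-point environment (not the paper's rig)."""

    def __init__(self, target=5.0):
        self.target = target
        self.flow = 0.0

    def reset(self):
        self.flow = random.uniform(0.0, 10.0)
        return self._obs()

    def _obs(self):
        return torch.tensor([self.flow, self.target], dtype=torch.float32)

    def step(self, action):
        # Actions 0/1/2 nudge the pump rate down, keep it, or nudge it up.
        self.flow += (action - 1) * 0.5 + random.gauss(0.0, 0.05)  # actuation + noise
        reward = -abs(self.flow - self.target)                     # penalise set-point error
        return self._obs(), reward


def make_qnet():
    # Small MLP mapping the 2-D observation to Q-values for the 3 actions.
    return nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))


def train(episodes=200, steps=50, gamma=0.99, eps=0.1, batch=64):
    env, qnet = ToyFlowEnv(), make_qnet()
    target_net = make_qnet()
    target_net.load_state_dict(qnet.state_dict())
    opt = optim.Adam(qnet.parameters(), lr=1e-3)
    buffer = deque(maxlen=10_000)  # experience replay memory

    for ep in range(episodes):
        state = env.reset()
        for _ in range(steps):
            # Epsilon-greedy action selection.
            if random.random() < eps:
                action = random.randrange(3)
            else:
                action = qnet(state).argmax().item()
            next_state, reward = env.step(action)
            buffer.append((state, action, reward, next_state))
            state = next_state

            if len(buffer) >= batch:
                s, a, r, s2 = zip(*random.sample(buffer, batch))
                s, s2 = torch.stack(s), torch.stack(s2)
                a = torch.tensor(a)
                r = torch.tensor(r, dtype=torch.float32)
                # One-step TD target from a periodically synced target network.
                with torch.no_grad():
                    target = r + gamma * target_net(s2).max(dim=1).values
                q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad()
                loss.backward()
                opt.step()

        if ep % 10 == 0:
            target_net.load_state_dict(qnet.state_dict())


if __name__ == "__main__":
    train()
```

In a real deployment the toy environment would be replaced by an interface to the pumps and the imaging or sensing pipeline of the microfluidic platform; the agent structure (replay buffer, target network, epsilon-greedy policy) is the standard DQN recipe rather than the authors' specific implementation.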
URI
https://pubs.acs.org/doi/10.1021/acsomega.8b01485
https://repository.hanyang.ac.kr/handle/20.500.11754/119631
ISSN
2470-1343
DOI
10.1021/acsomega.8b01485
Appears in Collections:
GRADUATE SCHOOL[S](대학원) > BIONANOTECHNOLOGY(바이오나노학과) > Articles
Files in This Item:
There are no files associated with this item.