Agent-based error prevention algorithms


Xin W. Chen a,⇑, S.Y. Nof b,1
a Department of Industrial and Manufacturing Engineering, School of Engineering, Southern Illinois University Edwardsville, Edwardsville, IL 62026-1805, USA
b School of Industrial Engineering, Purdue University, West Lafayette, IN 47907-2023, USA
Expert Systems with Applications 39 (2012) 280–287. doi:10.1016/j.eswa.2011.07.018
⇑ Corresponding author. Tel.: +1 618 650 2853; fax: +1 618 650 2555. E-mail addresses: [email protected] (X.W. Chen), [email protected] (S.Y. Nof).
1 Tel.: +1 765 494 5427; fax: +1 765 494 1299.

Keywords: Agent-based algorithms; Collaboration; Distributed control; Error analysis; Fault diagnosis; Prevention

Abstract

This article presents a new methodology using distributed algorithms to identify and prevent errors in production and service. A sequential production/service line is selected to challenge the analysis, and reveal if the distributed algorithms can outperform centralized algorithms in automating error prevention. Agent-based error prevention algorithms (AEPAs) are developed for distributed agents to identify and prevent errors with decision rules. Analytical studies and simulation experiments are conducted to compare AEPAs with traditional centralized error prediction and detection algorithms. The results show that the AEPAs employing nominal and optimistic rules perform better than the centralized algorithms in terms of preventability and reliability. Collaboration among agents improves AEPAs' performance. It is recommended to prevent errors by two agents simultaneously executing the AEPA employing the integrated nominal rule. © 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Errors have existed before, occur now, and will continue to exist as long as activities of providing products and services must meet customers' expectations. Detecting errors before products and services are delivered to customers is critical for enterprises. Errors are sometimes acceptable inside an enterprise and during the use of products or services by customers. Inside an enterprise, errors occurring in early stages of a process which will not cause errors in final products or services are, if corrected, tolerated to a certain extent. Outside of the supplying enterprise, if the use of products or services is not affected by customers' faults, i.e., they are fault tolerant, this is viewed as an attractive feature. Except for these two situations, errors are not tolerated and need to be detected and prevented. The main objectives of error detection and prevention are: (1) improving the quality of products/services; (2) eliminating waste due to errors; (3) improving the productivity and efficiency of production/service.

A theory of error and conflict detection with agents and protocols has been developed (Lara & Nof, 2003; Nof & Chen, 2003; Yang, Chen, & Nof, 2005). We have recently developed algorithms to detect and predict errors in the process of providing a single product/service (Chen & Nof, 2007), and a conflict and error detection and prediction model to study the time requirement of detection and prediction (Chen & Nof, 2010). The work reported here extends previous research and develops agent-based error prevention algorithms (AEPAs) to prevent errors that occur in producing multiple products/services. These errors can be prevented if they are predicted before they occur. The AEPAs are prevention algorithms and are compared with two commonly used, centralized error prediction and detection algorithms. Analytical studies and experiment results show that the AEPAs employing nominal and optimistic decision rules outperform the centralized prediction and detection algorithms in terms of preventability and reliability. Collaboration among agents improves the AEPAs' performance. It is recommended to prevent errors by two agents simultaneously executing the AEPA employing the integrated nominal rule.

The main contribution of this research is twofold: (1) Dynamic error prevention. Most error prevention methods in production and service are static. For instance, humans prevent errors through violation of expectations (Klein, 1997); the centralized prediction algorithm is executed before production or service starts. Various methods have been developed to automate error prevention (Chen & Nof, 2009), whereas there is limited study and implementation of dynamic error prevention for production and service. The AEPAs developed in this research are the first algorithms to dynamically prevent errors in producing multiple products and services. (2) Agent-based intelligent algorithms. Algorithms developed previously for error prevention in production and service are mostly centralized approaches which do not require collaboration between distributed agents. They often have poor reliability or preventability. The AEPAs developed in this research are agent-based algorithms that can intelligently prevent errors with the support of accurate information obtained by distributed agents locally or through communication. Table 1 summarizes the differences between the AEPAs and other error prediction and detection methods developed earlier (e.g., Kanawati, Nair, Krishnamurthy, & Abraham, 1996; Klein, 1997; Roos, ten Teije, & Witteveen, 2003; Steininger & Scherrer, 1997; Svenson & Salo, 2001).

Table 1. Comparison between AEPAs and other error prevention methods.

Comparison   | AEPAs                                                         | Other error prediction and detection methods
Work system  | Automated with dynamic input obtained by distributed agents in real time | Mostly manual; automated with static input
Methodology  | Agent-based, intelligent algorithms; agents collaborate to prevent errors with accurate information | Centralized without collaboration between distributed agents

There are substantial differences between quality control methods and AEPAs. Quality control methods are focused on the processes used to produce products/services, with tools including control charts, capability analysis, experimental design, sampling inspection, and total quality management. Quality control methods answer some but not all important questions for a production/service system. For instance, quality control methods do not answer questions such as: (1) Has a system provided, or will a system provide, the amount of products/services required by customers? (2) Does each process step in a production/service system have sufficient raw materials to produce the amount of products/services required by customers? Even if a process is "under control" with the help of quality control methods, it is not guaranteed that the process meets customers' requirements. The AEPAs are developed to prevent errors that occur when a production/service system's objectives to produce a certain amount of products/services are not satisfied. Table 2 summarizes the differences between quality control methods and AEPAs.

Table 2. Comparison between quality control methods and AEPAs.

Comparison  | Quality control methods                                       | AEPAs
Objective   | Ensure that processes work effectively                        | Prevent errors to meet customers' requirements
Methodology | Control charts, capability analysis, experimental design, sampling inspection, and total quality management | Distributed algorithms executed by intelligent agents
Function    | Detect errors in processes                                    | Prevent errors when objectives to produce a certain amount of products/services are not satisfied

2. Problem definition and related work

2.1. Research assumption and case study

This research studies sequential production/service lines (Fig. 1) in which final products/services are obtained as the result of the work completed by a group of agents that are organized as a coordination network (Co-net) according to a certain sequence. Raw materials or semi-finished products/services enter a Co-net and are processed by agent 1 through agent n sequentially. Final products/services exit the Co-net following the work completed by the last agent (agent n). An agent sends its information to the succeeding agent and receives information from the preceding agent.

Fig. 1. Sequential production/service line.

Sequential production/service lines have the simplest system topology. The AEPAs are expected to outperform centralized algorithms for complex system topologies because agents can execute distributed algorithms in parallel for parallel activities in a system. For sequential production/service lines, however, it is not clear if distributed algorithms have any advantage over centralized algorithms. It is therefore critical to understand how distributed algorithms perform compared to centralized algorithms over sequential production/service lines. The results can also bring insights into the application of distributed algorithms for complex system topologies.

A sequential Co-net with three agents is used to illustrate the research definitions. In the case study, products/services are processed first by agent 1, then agent 2, and finally agent 3. The three agents are organized as a Co-net. The Co-net objective is to produce 900 qualified products/services within 100 min.

2.2. Research definitions

2.2.1. Agent
Agents have been used in software engineering and are emerging in other automation areas (Nof, 2003). Agent technology enables automatic error detection and prediction and reduces detection cost (Huang, Ceroni, & Nof, 2000; Huang & Nof, 1999). In production/service, an agent refers to any entity that is capable of executing tasks and communicating with other entities. Agents must be defined (Huang & Nof, 2000) and can augment human work abilities through collaboration with other entities following protocols. Each agent has a set of objectives, each of which can be denoted by u(i, j, t), where i is the index of agents, j is the index of agent objectives, and t is the time when the objective needs to be satisfied. In the case study, u(1, 1, 100) = u(2, 1, 100) = u(3, 1, 100) = 900.

2.2.2. Co-net
A Co-net is a network that enables collaboration among a group of agents (Yang et al., 2005). A Co-net has a set of objectives, each of which is denoted by ϕ(k, t_k), where k is the index of Co-net objectives and t_k is the time when the objective needs to be satisfied. In the case study, ϕ(1, 100) = 900. Eq. (1) must be met for a sequential Co-net with ϕ(k, t_k):

u(1, j_1, t) = u(2, j_2, t) = ... = u(n, j_n, t) = ϕ(k, t_k), t = t_k    (1)

2.2.3. Error
Errors have been defined in many different ways depending on the context within which they were used (Kao, 1995; Klein, 1997; Najjari & Steiner, 1997). Other terms used to describe the 'error' concept include failure (Lopes & Camarinha-Matos, 1995; Najjari & Steiner, 1997; Steininger & Scherrer, 1997; Toguyeni, Craye, & Gentina, 1996), fault (Kao, 1995; Roos et al., 2003; Steininger & Scherrer, 1997), exception (Bruccoleri & Pasek, 2002), flaw (Miceli, Sahraoui, & Godin, 1999), and conflict (Ronsse & Bosschere, 2002), although 'error' (Bolchini, Fornaciari, Salice, & Sciuto, 1998; Bolchini, Pomante, Salice, & Sciuto, 2002; Jeng, 1997; Kanawati et al., 1996; Kao, 1995; Klein, 1997; Najjari & Steiner, 1997; Ronsse & Bosschere, 2002; Steininger & Scherrer, 1997; Svenson & Salo, 2001) is the most popular term. Eq. (2) provides a generic definition of error:

∃E(i, j, t) iff u(i, j, t) is not satisfied, ∀i, j, t, u(i, j, t) ⊂ ϕ(k, t_k)    (2)

The notation u(i, j, t) ⊂ ϕ(k, t_k) indicates that the agent objective is determined by the Co-net objective. This is always true for a sequential Co-net according to Eq. (1). In previous research (Chen & Nof, 2007), algorithms have been developed to detect and predict errors unrelated to Co-net objectives. This research aims at developing AEPAs to prevent errors that are related to Co-net objectives. For instance, suppose agents 1, 2, and 3 in the case study have produced 870, 910, and 800 semi-finished products/services at t = t_k = 100, respectively. There are two errors, E(1, 1, 100) and E(3, 1, 100). Both are related to the Co-net objective ϕ(1, 100) = 900.

2.2.4. Agent output of conformities
C(i, j, t) is the number of cumulative conformities produced by agent i by time t to satisfy objective j, e.g., C(1, 1, 100) = 870.

2.2.5. Agent output of nonconformities
N(i, j, t) is the number of cumulative nonconformities produced by agent i by time t in the process of fulfilling objective j, e.g., N(1, 1, 100) = 16. Agent i's total output by time t in order to satisfy objective j is equal to C(i, j, t) + N(i, j, t).

2.2.6. Agent minimum input
I(i, j, t) is the minimal input units for agent i at time t to satisfy objective j. An agent needs one or more inputs to produce products/services. The usage of each input to produce one product/service may vary, e.g., an agent needs 30 capacitors and 20 resistances to produce one circuit board. One input unit is used to produce one product/service.
According to this definition, 90 capacitors are three "input units" and 80 resistances are four "input units." The minimal input units are the minimum of all input units. For instance, I(1, 1, 90) = 5 indicates that five more products/services can be produced by agent 1 at/after t = 90.

2.2.7. Agent conformability
g(i, j) is the probability that a product/service is within specifications after being operated on by agent i to satisfy objective j, assuming it is within specifications before being operated on by agent i. g(i, j) is between zero and one. For instance, g(1, 1) = 0.91 indicates there is a 91% probability that a semi-finished product/service is within specifications after being operated on by agent 1.

2.2.8. Identify; detect; predict; prevent
An error is identified if the error is claimed (it may or may not occur). An error is detected if the error has occurred and is identified. An error is predicted if the error has not occurred (it may or may not occur in the future) and is identified. An error is prevented if the error has not occurred, will occur in the future, and is identified.

2.3. Problem definition

Traditionally, errors are predicted with reliability theory before production/service starts, or detected at the last agent of a sequential Co-net after they occur. Both are centralized approaches which do not require collaboration between distributed agents. To the best of the authors' knowledge, no work has addressed the limits of the centralized approaches, or the development and implementation of distributed, agent-based approaches in producing multiple products or services. The purpose of AEPAs is to prevent errors. The problem of interest is: given ϕ(k, t_k), what are the best methods to prevent E(i, j, t) with u(i, j, t), C(i, j, t), N(i, j, t), I(i, j, t), and g(i, j)?

2.4. Related work

Error detection has been studied in various operational environments for which detection models, methods, mechanisms, and methodologies were developed. Two difficulties emerge in detecting errors: (1) Area dependence. Detection models are applied to specific areas, e.g., concurrent error detection (CED) in computer engineering (Bolchini et al., 1998, 2002; Kanawati et al., 1996; Mitra & McCluskey, 2001), license event reports (LERs) for nuclear power reactors (Svenson & Salo, 2001), machine learning approaches in assembly (Lopes et al., 1995), and error detection in databases (Klein, 1997). (2) Error dependence. Each detection method is developed for a group of similar errors, e.g., a metric-based technique for design flaw detection (Miceli et al., 1999) and non-intrusive detection of synchronization errors (Ronsse & Bosschere, 2002). Error detection has been intensively studied for assembly (Najjari & Steiner, 1997) and design (Bolchini et al., 1998; Miceli et al., 1999; Mitra & McCluskey, 2001; Ronsse & Bosschere, 2002), and detection methodologies have been studied from various perspectives (Bolchini et al., 2002; Kanawati et al., 1996; Roos et al., 2003; Steininger & Scherrer, 1997; Svenson & Salo, 2001). Limited research has been conducted, however, to develop generic detection models and algorithms for production/service.

Eq. (3) is the common centralized error detection rule (DR). Similar rules were developed in Chen and Nof (2007) to detect errors in the process of producing a single product/service. Suppose agent n is the last agent of a sequential Co-net. An error is detected at n if Eq. (3) is met; u(n, j, t_k) is equal to ϕ(k, t_k). The DR is used to detect errors after they occur.

∃E(n, j, t_k) if C(n, j, t_k) < u(n, j, t_k)    (3)

Error propagation has been studied in software engineering (Abdelmoez et al., 2004) and manufacturing (Kelly, 2004; Yi, Haralick, & Shapiro, 1994).
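As a concrete illustration, the detection rule DR of Eq. (3) reduces to a single comparison at the last agent. The sketch below is illustrative only; the function name and signature are assumptions, not from the paper:

```python
# Sketch of the centralized detection rule DR (Eq. (3)): at the last
# agent n of a sequential Co-net, an error E(n, j, t_k) is detected,
# after the fact, when cumulative conformities C(n, j, t_k) fall short
# of the agent objective u(n, j, t_k). Names here are illustrative.

def detect_error(C_n: int, u_n: int) -> bool:
    """DR: an error exists at the last agent if C(n, j, t_k) < u(n, j, t_k)."""
    return C_n < u_n

# Case study: agent 3 (the last agent) produced 800 conformities against
# an objective of 900, so E(3, 1, 100) is detected.
assert detect_error(800, 900) is True
assert detect_error(900, 900) is False  # objective met exactly: no error
```

Because the DR fires only after t_k, it can confirm an error but never prevent one, which is the limitation the AEPAs are designed to remove.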
Recently, a detection model addressing error and conflict propagation has been developed (Chen & Nof, 2010).

Error prediction has been given more attention in the preparation of production/service, including design, procurement, and testing, than in other stages of production/service, where only error detection has been applied. Most research has been focused on prediction errors (Mir, Mayer, & Fortin, 2002). Error prediction is more difficult than error detection because prediction requires the understanding of an agent or a system, whereas detection is focused on the output. Eq. (4) is the common centralized error prediction rule (PR) based on reliability theory. Suppose there are n agents in a sequential Co-net. An error is predicted at t = 0 if Eq. (4) is met:

∃E(i_n, j, t_k) if { min_{i=1,...,n} I(i, j, 0) } × ∏_{i=1}^{n} g(i, j) < u(i_n, j, t_k)    (4)

The decision rules developed in Chen and Nof (2007) can predict errors in the process of producing a single product/service, whereas the PR predicts errors in the process of producing multiple products/services. The PR requires information from all agents, which is difficult to obtain when a Co-net has many agents. The result of the PR may not be reliable because it is used at the beginning of production/service with information that may not be accurate. There is a need to develop algorithms that can reliably predict and prevent errors.

3. AEPAs

The focus of this research is to develop AEPAs for a sequential Co-net with multiple agents. An algorithm has three steps: (Step 1) Determine decision rules; (Step 2) Obtain information; (Step 3) Apply decision rules and make decisions. Each agent follows these steps to identify and prevent errors. The rest of this section describes the newly developed decision rules. The next section validates these rules and identifies the best rules used in AEPAs to prevent errors.

3.1. Local error prevention

An agent can identify errors with local information, i.e., the information about the agent, according to three decision rules, R1, R2, and R3.

3.1.1. Nominal rule R1

∃E(i, j, t_k) if I(i, j, t) × g(i, j) + C(i, j, t) < u(i, j, t_k), t < t_k    (5)

An error is identified if Eq. (5) is met. The sum of I(i, j, t) × g(i, j) and C(i, j, t) must be greater than or equal to the total number of products/services needed; otherwise an error will occur.

3.1.2. Conservative rule R2

∃E(i, j, t_k) if I(i, j, t) × min{ C(i, j, t)/[C(i, j, t) + N(i, j, t)], g(i, j) } + C(i, j, t) < u(i, j, t_k), t < t_k    (6)

An error is identified if Eq. (6) is met, indicating the decision maker is conservative. R2 considers the minimum of agent conformability g(i, j) and current agent performance C(i, j, t)/[C(i, j, t) + N(i, j, t)].

3.1.3. Optimistic rule R3

∃E(i, j, t_k) if I(i, j, t) × max{ C(i, j, t)/[C(i, j, t) + N(i, j, t)], g(i, j) } + C(i, j, t) < u(i, j, t_k), t < t_k    (7)

An error is identified if Eq. (7) is met, indicating the decision maker is optimistic. The maximum of agent conformability and current performance is used in Eq. (7).

3.2. Propagation error prevention

R4, R5, and R6, defined in Eqs. (8)–(10), respectively, are developed to identify errors propagated from the preceding agent with both agents' information. Eq. (11) combines R4, R5, and R6.

3.2.1. Nominal rule R4

∃E(i, j, t_k) if [ I(i−1, j, t) × g(i−1, j) + C(i−1, j, t) − N(i, j, t) − C(i, j, t) ] × g(i, j) + C(i, j, t) < u(i, j, t_k), t < t_k    (8)

3.2.2. Conservative rule R5

∃E(i, j, t_k) if { I(i−1, j, t) × min[ C(i−1, j, t)/(C(i−1, j, t) + N(i−1, j, t)), g(i−1, j) ] + C(i−1, j, t) − N(i, j, t) − C(i, j, t) } × min[ C(i, j, t)/(C(i, j, t) + N(i, j, t)), g(i, j) ] + C(i, j, t) < u(i, j, t_k), t < t_k    (9)

3.2.3. Optimistic rule R6

∃E(i, j, t_k) if { I(i−1, j, t) × max[ C(i−1, j, t)/(C(i−1, j, t) + N(i−1, j, t)), g(i−1, j) ] + C(i−1, j, t) − N(i, j, t) − C(i, j, t) } × max[ C(i, j, t)/(C(i, j, t) + N(i, j, t)), g(i, j) ] + C(i, j, t) < u(i, j, t_k), t < t_k    (10)

3.2.4. Combination of R4, R5, and R6

∃E(i, j, t_k) if [ I(i−1, j, t) × A_{i−1} + C(i−1, j, t) − N(i, j, t) − C(i, j, t) ] × A_i + C(i, j, t) < u(i, j, t_k), t < t_k    (11)

A_{i−1} and A_i are attitude coefficients of agents i−1 and i, respectively, and are defined in Eqs. (12) and (13):

A_x = g(x, j), nominal; min[ C(x, j, t)/(C(x, j, t) + N(x, j, t)), g(x, j) ], conservative; max[ C(x, j, t)/(C(x, j, t) + N(x, j, t)), g(x, j) ], optimistic    (12)

x = i − 1 or i    (13)

3.3. Integrated error prevention

Each agent can use information available locally and information obtained from other agents to identify errors. R7, R8, and R9, defined in Eqs. (14)–(16), respectively, integrate R1 and R4, R2 and R5, and R3 and R6, respectively.

3.3.1. Nominal rule R7

∃E(i, j, t_k) if min{ I(i, j, t), I(i−1, j, t) × g(i−1, j) + C(i−1, j, t) − N(i, j, t) − C(i, j, t) } × g(i, j) + C(i, j, t) < u(i, j, t_k), t < t_k    (14)

3.3.2. Conservative rule R8

∃E(i, j, t_k) if min{ I(i, j, t), I(i−1, j, t) × A_{i−1} + C(i−1, j, t) − N(i, j, t) − C(i, j, t) } × A_i + C(i, j, t) < u(i, j, t_k), t < t_k    (15)

with the conservative attitude coefficients of Eq. (12).

3.3.3. Optimistic rule R9

∃E(i, j, t_k) if min{ I(i, j, t), I(i−1, j, t) × A_{i−1} + C(i−1, j, t) − N(i, j, t) − C(i, j, t) } × A_i + C(i, j, t) < u(i, j, t_k), t < t_k    (16)

with the optimistic attitude coefficients of Eq. (12).

Table 3 summarizes the error detection algorithm, error prediction algorithm, and AEPAs. There are three main differences between them: (1) An agent can decide whether it executes an AEPA and at what time it executes an AEPA. An AEPA employs a decision rule. Multiple AEPAs can be executed simultaneously by multiple agents. The error detection and prediction algorithms are centralized approaches which do not require collaboration between distributed agents. (2) The error detection algorithm detects errors after they occur. It cannot prevent errors.
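The nine rules share one template: an estimate of future conformities plus current conformities is compared against the objective u(i, j, t_k), with the estimate scaled by the attitude coefficient A_x of Eq. (12). A minimal sketch under that reading follows; all function and parameter names are illustrative assumptions (the paper defines only the equations):

```python
# Sketch of the AEPA decision rules via the attitude coefficient A_x
# (Eq. (12)): nominal uses g(x, j); conservative uses the min of g and
# current performance C/(C + N); optimistic uses the max. Each rule
# returns True when an error E(i, j, t_k) is identified. Names assumed.

def attitude(C, N, g, mode):
    perf = C / (C + N) if (C + N) > 0 else g  # current performance C/(C+N)
    return {"nominal": g,
            "conservative": min(perf, g),
            "optimistic": max(perf, g)}[mode]

def local_rule(I_i, C_i, N_i, g_i, u, mode):
    # R1-R3 (Eqs. (5)-(7)): I(i,j,t) * A_i + C(i,j,t) < u(i,j,t_k)
    return I_i * attitude(C_i, N_i, g_i, mode) + C_i < u

def propagation_rule(I_p, C_p, N_p, g_p, C_i, N_i, g_i, u, mode):
    # R4-R6 (Eqs. (8)-(11)): input expected from the preceding agent i-1
    expected = I_p * attitude(C_p, N_p, g_p, mode) + C_p - N_i - C_i
    return expected * attitude(C_i, N_i, g_i, mode) + C_i < u

def integrated_rule(I_i, I_p, C_p, N_p, g_p, C_i, N_i, g_i, u, mode):
    # R7-R9 (Eqs. (14)-(16)): min of local input and propagated input
    expected = I_p * attitude(C_p, N_p, g_p, mode) + C_p - N_i - C_i
    return min(I_i, expected) * attitude(C_i, N_i, g_i, mode) + C_i < u

# Illustrative check with case-study-style numbers (C=870, N=16, g=0.91,
# u=900): with only 30 input units left, R1 flags an error
# (30*0.91 + 870 = 897.3 < 900); with 40 units it does not.
assert local_rule(30, 870, 16, 0.91, 900, "nominal") is True
assert local_rule(40, 870, 16, 0.91, 900, "nominal") is False
```

Note that because min(I_i, expected) ≤ I_i, the integrated rules can only identify at least as many errors as the corresponding local rules, which is the inequality behind Proposition 4A below.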
AEPAs are executed before an 400 errors by using R1 at t2 = 50 min. The preventability is detected (DR) divided by the total number of errors. Suppose and provide guidelines on how to apply AEPAs. wit in the above example, agent 1 identifies 500 errors with R1 at t2 = 50 min. The reliability is 400/500=0.8. There are four propositions regarding preventability: Proposition 1A. The preventability obtained by the error detection algorithm is zero because it does not prevent errors before they occur. Proposition 2A. The preventability obtained by the error prediction algorithm is expected to be lower than the preventability obtained by certain AEPAs because the prediction algorithm uses inaccurate information whereas an AEPA uses accurate information. Proposition 3A. The preventability obtained by AEPAs that employ conservative rules R2, R5, and R8 is expected to be higher than or equal to the preventability obtained by AEPAs that employ nominal rules R1, R4, and R7, respectively. The preventability obtained by AEPAs that employ nominal rules R1, R4, and R7 is expected to be higher than or equal to the preventability obtained by AEPAs that employ optimis- tic rules R3, R6, and R9, respectively. This is because the conservative rules identify more errors than nominal rules, which identify more errors than optimistic rules. The proof is as follows: min Cði; j; tÞ Cði; j; tÞ þ Nði; j; tÞ ;gði; jÞ � � 6 gði; jÞ 6 max Cði; j; tÞ Cði; j; tÞ þ Nði; j; tÞ ;gði; jÞ � � Iði; j; tÞ �min Cði; j; tÞ Cði; j; tÞ þ Nði; j; tÞ ;gði; jÞ � � þ Cði; j; tÞ 6 Iði; j; tÞ � gði; jÞ þ Cði; j; tÞ 6 Iði; j; tÞ �max½ Cði; j; tÞ Cði; j; tÞ þ Nði; j; tÞ ;gði; jÞ� þ Cði; j; tÞ ) In terms of the number of errors identified; 400/800=0.5. 
(2) Reliability (Raghavan, Shakeri, & Pattipati, 1999; Tu, Pattipat- i, Deb, & Malepati, 2003) is the ratio of the number of errors prevented divided by the number of errors identified (R1– R9) or predicted (PR), or the ratio of the number of errors objective needs to be satisfied. An agent can identify errors before they occur with AEPAs. (3) The error prediction algorithm requires information from all agents. That information is difficult to obtain, especially for a Co-net with many distributed agents. The algorithm is exe- cuted before production/service starts with information that may not be accurate. AEPAs are executed by distributed agents when the production/service is running. Each agent has the accurate information to execute AEPAs. 3.4. Performance measures and propositions Two performance measures are defined to compare between AEPAs and the detection and prediction algorithms. Note that the time required to execute AEPAs or the detection and prediction algorithms is relatively small and is negligible. (1) Preventability is the ratio of the number of errors prevented divided by the total number of errors. Suppose the produc- tion/service described in the case study repeats for 1000 times and 800 errors occur at t = 100 min. Agent 1 prevents 284 X.W. Chen, S.Y. Nof / Expert Systems R2P R1P R3: Similarly; it can be proven that R5P R4P R6 and R8P R7P R9 4. Validation and discussion 4.1. Design of experiments Simulation experiments with AutoMod (AutoMod, Version 11.1, 1998–2003) for the sequential Co-net described in the case study are conducted to compare the error detection algorithm, error pre- diction algorithm, and AEPAs. The objectives of experiments are to validate AEPAs and identify best algorithms according to perfor- mance measures. 
In experiments, there are three independent vari- ables, agent, decision rule, and time, and two dependent variables, preventability and reliability: (1) Independent variable agent has three levels: agent 1, agent during collaboration. This is possible only with distributed, agent- based algorithms such as AEPAs. Experiments are conducted to validate the nine propositions Proposition 2C. It is expected that compared to preventing errors by a single agent, the preventability increases if multiple agents collabo- rate to prevent errors, i.e., multiple agents execute AEPAs simulta- neously or at different times, because more information is utilized Proposition 3B. It is expected that the reliability obtained by AEPAs that employ optimistic rules R3, R6, and R9 is higher than the reliabil- ity obtained by AEPAs that employ nominal rules R1, R4, and R7, respectively. Similarly, it is also expected that the reliability obtained by AEPAs that employ nominal rules R1, R4, and R7 is higher than the reliability obtained by AEPAs that employ conservative rules R2, R5, and R8. This is because of the trade-off between reliability and preventability. There are two propositions regarding AEPAs: Proposition 1C. It is expected that the later an AEPA is executed by an agent, the higher preventability and reliability can be obtained. This is because the later an AEPA is executed, the less change is expected before the objective needs to be satisfied. Proposition 4A. The preventability obtained by AEPAs that employ integrated rules R7, R8, and R9 is higher than or equal to the prevent- ability obtained by AEPAs that employ local rules R1, R2, and R3, respectively, and the preventability obtained by AEPAs that employ propagation rules R4, R5, and R6, respectively. This is because R7, R8, and R9 can identify more errors than R1, R2, and R3, respectively, and R4, R5, and R6, respectively. 
The proof is as follows: min Iði; j; tÞ; Iði� 1; j; tÞ � gði� 1; jÞþ Cði� 1; j; tÞ � Nði; j; tÞ � Cði; j; tÞ � � 6 Iði; j; tÞ ) In terms of the number of errors identified; R7P R1: Similarly; it can be proven that R7P R4; R8P R2; R8P R5; R9P R3; and R9P R6 There are three propositions regarding reliability: Proposition 1B. The reliability obtained by the error detection algorithm is one because all errors are detected. Proposition 2B. The reliability obtained by the error prediction algo- rithm is expected to be lower than the reliability obtained by certain AEPAs because the prediction algorithm is executed before produc- tion/service starts with inaccurate information. h Applications 39 (2012) 280–287 2, and agent 3. They operate on products/services sequen- tially and each of them can execute AEPAs. I(i, 1, 0) is an integer and is assumed to follow a uniform distribu- tion U(900, 1000) with a mean of 950 units. The mean is available information to the prediction algorithm at the beginning of pro- duction/service. The exact number of input units for an agent is available to the agent and the succeeding agent; this information is not available to the prediction algorithm. To study the influence of different AEPA execution times (earlier or later) on preventabil- ity and reliability, the speed of production/service is assumed to be sufficiently large to process all input units within 100 min and is the same for all three agents. There are differences between AEPAs and traditional hypothesis testing such as Wald’s sequential testing (Garrison & Hickey, 1984; Wald, 1947). First, the purposes of the two approaches are differ- ent. Wald’s sequential testing tests statistically if an agent per- the prediction algorithm is lower than the preventability Table 3 Summary of error detection, prediction, and prevention algorithms. 
Algorithm step Description Error detection algorithm Error prediction algorithm AEPA 1 Determine decision rules N/A N/A Choose from R1 to R9 2 Obtain information Information of the last agent in a sequential Co-net is needed Inaccurate information of all agents obtained before production/ service starts Accurate information obtained by distributed agents 3 Apply decision Detection rule (DR) (Eq. (3)) Prediction rule (PR) (Eq. (4)) One of nine rules (R1– X.W. Chen, S.Y. Nof / Expert Systems with Applications 39 (2012) 280–287 285 (2) Independent variable decision rule has nine levels: R1, R2, R3, R4, R5, R6, R7, R8, and R9. (3) Independent variable time has three levels: 1, 2, and 3, i.e., t1 = 10 min, t2 = 50 min, and t3 = 90 min. (4) Dependent variable preventability is a performance measure and a real number between zero and one. (5) Dependent variable reliability is a performance measure and a real number between zero and one. There are 3 � 3 + 2 � 9 � 3 = 63 combinations of three indepen- dent variables because agent 1 is the first agent in the Co-net and cannot execute AEPAs that employ decision rules R4 through R9. Each experiment run provides 63 binary values each of which is the outcome of one combination. For instance, if agent 2 executes an AEPA that employs R7 and identifies an error at t3 = 90 min; the experiment result for the combination is ‘‘1’’. The result is ‘‘0’’ if no error is identified. In addition to the 63 values, each experiment run provides two more binary values. One of them indicates whether an error is de- tected with the detection algorithm that employs detection rule (DR) at t = 100 min. The other indicates whether an error is pre- dicted with the prediction algorithm that employs prediction rule (PR) at t = 0. The value is ‘‘1’’ if an error is detected or predicted; the value is ‘‘0 if no error is detected or predicted. The agent conformability g(i, 1) is assumed to follow a uniform distribution U(0.98, 1) with a mean of 0.99. 
g(i, 1) is a constant in one experiment run but varies across different runs. According to reliability theory, I(i, 1, 0) must be at least 900 to satisfy the objective when g(i, 1) = 1. In the worst case, when g(i, 1) = 0.98, I(1, 1, 0) must be at least 900/0.98^3 ≈ 957 to satisfy the objective, while I(2, 1, 0) and I(3, 1, 0) can be less than 957. To include sufficient variations, I(i, 1, 0) is assumed to follow the uniform distribution U(900, 1000).

Table 4. Preventability obtained by AEPAs.

Agent | Time | Local: R1 R2 R3 | Propagation: R4 R5 R6 | Integrated: R7 R8 R9
1 | t1 | .21 .62 .15 | N/A | N/A
1 | t2 | .21 .92 .21 | N/A | N/A
1 | t3 | .22 .96 .22 | N/A | N/A
2 | t1 | .22 .69 .16 | .41 .90 .31 | .57 .92 .43
2 | t2 | .22 .95 .22 | .40 1.00 .40 | .57 1.00 .57
2 | t3 | .23 .98 .23 | .40 1.00 .40 | .58 1.00 .58
3 | t1 | .21 .68 .16 | .39 .93 .30 | .55 .93 .41
3 | t2 | .21 .94 .21 | .41 1.00 .41 | .57 1.00 .56
3 | t3 | .22 .97 .22 | .42 1.00 .42 | .58 1.00 .58

4.2. Experiments results and discussion

One thousand experiment runs are conducted, and 439 errors occur at t = 100 min. The detection algorithm detects all 439 errors after they occur.
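Given the run outcomes, the two dependent variables can be computed. The definitions below are assumed (preventability as the share of occurred errors that were prevented, reliability as the share of occurred errors that were correctly identified); they are consistent with the detection algorithm's reported scores:

```python
def preventability(prevented, occurred):
    # Assumed definition: fraction of occurred errors that were prevented.
    return prevented / occurred if occurred else 1.0

def reliability(correctly_identified, occurred):
    # Assumed definition: fraction of occurred errors correctly identified.
    return correctly_identified / occurred if occurred else 1.0

occurred = 439
# The detection algorithm finds every error, but only after it occurs:
assert preventability(0, occurred) == 0.0
assert reliability(439, occurred) == 1.0
```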
The preventability is zero. The prediction algorithm predicts three errors and prevents two errors; the preventability is 2/439 ≈ 0.005. Table 4 summarizes the preventability obtained by AEPAs that employ R1 through R9 and are executed by three agents at three different times. These results are used to validate the four propositions about preventability:

(1) Proposition 1A is validated because the detection algorithm does not prevent any errors.
(2) Proposition 2A is validated. The preventability obtained by the prediction algorithm is lower than the preventability obtained by all AEPAs. Although the prediction algorithm has information of all agents, its performance is worse than any AEPA in terms of preventability because of the inaccurate information about the input units for each agent.
(3) Data in Table 4 validate both Propositions 3A and 4A.

The reliability obtained by the detection algorithm is one. The reliability obtained by the prediction algorithm is 0.67. Table 5 summarizes the reliability obtained by AEPAs that employ R1 through R9. The results validate the three propositions about reliability:

(1) Proposition 1B is validated because the detection algorithm detects all errors that have occurred.
(2) The reliability obtained by the prediction algorithm is lower than the reliability obtained by AEPAs that employ nominal and optimistic rules, and higher than the reliability obtained by AEPAs that employ conservative rules. Proposition 2B is validated.
(3) Proposition 3B is validated.

The prediction algorithm and AEPAs can be used to prevent errors. The AEPAs that employ nominal and optimistic rules outperform the prediction algorithm in terms of preventability and reliability. In terms of the AEPA execution time, it can be observed from Tables 4 and 5 that in most cases the later an AEPA is executed, the higher the preventability and reliability that can be obtained. Proposition 1C is not always true, however.

One advantage of AEPAs is that multiple agents can collaborate to prevent errors. Tables 6 and 7 show the preventability and reliability, respectively, obtained by AEPAs executed by two agents collaboratively, i.e., two agents execute the same AEPA at the same time. The preventability in Table 6 is higher than the corresponding preventability in Table 4, which validates Proposition 2C. For instance, when agents 2 and 3 collaborate to identify errors with R6 at t2, the preventability is 0.72.
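A simple baseline puts the collaboration gain in perspective. If the two agents pooled purely independent decisions, with an error counted as identified when either agent identifies it (an assumed combining rule, not the paper's collaboration protocol), R6 at t2 would reach only about 0.65, below the 0.72 obtained collaboratively:

```python
def independent_union(p_a, p_b):
    """Preventability if two agents identified errors independently and an
    error counted as prevented when either agent identified it (an assumed
    baseline, not the paper's collaboration protocol)."""
    return 1 - (1 - p_a) * (1 - p_b)

# Agents 2 and 3 with R6 at t2 achieve 0.40 and 0.41 alone (Table 4).
baseline = independent_union(0.40, 0.41)   # ≈ 0.646
assert baseline < 0.72                     # collaborative value from Table 6
```

That the observed collaborative value exceeds even this union baseline is consistent with the agents sharing accurate local information rather than merely merging independent verdicts.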
When agents 2 and 3 identify errors independently with R6 at t2, the preventability is 0.40 and 0.41, respectively. The collaboration between agents improves the performance of AEPAs. The AEPA that employs R7 and is executed by agents 2 and 3 has both high preventability and reliability. Table 8 compares this AEPA with the detection and prediction algorithms.

Table 5. Reliability obtained by AEPAs.

Agent | Time | Local: R1 R2 R3 | Propagation: R4 R5 R6 | Integrated: R7 R8 R9
1 | t1 | 1.00 .48 1.00 | N/A | N/A
1 | t2 | 1.00 .45 1.00 | N/A | N/A
1 | t3 | 1.00 .45 1.00 | N/A | N/A
2 | t1 | 1.00 .50 1.00 | .98 .48 .99 | .99 .48 .99
2 | t2 | 1.00 .46 1.00 | .99 .44 .99 | 1.00 .44 1.00
2 | t3 | .99 .45 .99 | 1.00 .44 1.00 | 1.00 .44 1.00
3 | t1 | .93 .51 .96 | .90 .49 .94 | .90 .49 .94
3 | t2 | .95 .46 .96 | .95 .44 .95 | .94 .44 .95
3 | t3 | .97 .45 .97 | .95 .44 .95 | .96 .44 .96

Table 6. Preventability obtained by AEPAs executed by two agents.

Agents | Time | Local: R1 R2 R3 | Propagation: R4 R5 R6 | Integrated: R7 R8 R9
1 and 2 | t1 | .39 .89 .30 | N/A | N/A
1 and 2 | t2 | .40 1.00 .39 | N/A | N/A
1 and 2 | t3 | .42 1.00 .41 | N/A | N/A
1 and 3 | t1 | .38 .91 .29 | N/A | N/A
1 and 3 | t2 | .39 1.00 .39 | N/A | N/A
1 and 3 | t3 | .41 1.00 .41 | N/A | N/A
2 and 3 | t1 | .40 .92 .31 | .72 .99 .55 | .84 .99 .64
2 and 3 | t2 | .41 1.00 .40 | .73 1.00 .72 | .84 1.00 .84
2 and 3 | t3 | .43 1.00 .43 | .73 1.00 .73 | .85 1.00 .85

Table 7. Reliability obtained by AEPAs executed by two agents.

Agents | Time | Local: R1 R2 R3 | Propagation: R4 R5 R6 | Integrated: R7 R8 R9
1 and 2 | t1 | 1.00 .47 1.00 | N/A | N/A
1 and 2 | t2 | 1.00 .44 1.00 | N/A | N/A
1 and 2 | t3 | .99 .44 .99 | N/A | N/A
1 and 3 | t1 | .96 .48 .98 | N/A | N/A
1 and 3 | t2 | .97 .44 .98 | N/A | N/A
1 and 3 | t3 | .98 .44 .98 | N/A | N/A
2 and 3 | t1 | .96 .49 .98 | .94 .47 .96 | .93 .47 .96
2 and 3 | t2 | .97 .44 .98 | .97 .44 .97 | .96 .44 .96
2 and 3 | t3 | .98 .44 .98 | .97 .44 .97 | .97 .44 .97

5. Conclusion and future research

The traditional centralized error detection algorithm detects errors after the production/service completes; it cannot prevent errors. The traditional centralized error prediction algorithm predicts errors with inaccurate information from all agents before the production/service starts. The newly developed AEPAs allow multiple distributed agents to identify errors simultaneously, with accurate information, while the production/service is running.

Compared to the detection algorithm, AEPAs have higher preventability, and the reliability obtained by AEPAs that employ nominal and optimistic rules is close to the reliability obtained by the detection algorithm (Table 5). Compared to the prediction algorithm, the AEPAs that employ nominal and optimistic rules have better performance in terms of both preventability and reliability. When two agents execute the same AEPA simultaneously, the preventability increases (Tables 4 and 6) and the reliability increases or stays almost the same (Tables 5 and 7).

Table 8. Summary comparison of error detection, prediction, and prevention algorithms.

Comparison | Error detection algorithm | Error prediction algorithm | AEPA
Decision rule | DR (Eq. (3)) | PR (Eq. (4)) | R7 (Eq. (14))
Execution | Centralized | Centralized | Decentralized, by agents 2 and 3 simultaneously
Preventability | 0 | 0.005 | ≥0.84
Reliability | 1 | 0.67 | ≥0.93

The AEPA that employs R7 and is executed by agents 2 and 3 is recommended for automated error prevention because of its high preventability and reliability. When an AEPA is executed later, both the preventability and reliability increase in most cases. It is therefore recommended that AEPAs be executed at a time close to the time the Co-net objective needs to be satisfied, as long as there is sufficient time for error prevention. Future research can focus on three important directions:

(1) Expand the distributed AEPAs and apply them to more complex system topologies, e.g., parallel networks (Chen & Nof, 2010) in which multiple agents provide products/services to multiple agents.
(2) Expand the size of the network and incorporate various conditions for different agents. There can be hundreds or even thousands of agents in a network, and different agents may have different conformabilities. The performance of AEPAs needs to be studied for large networks.
(3) Identify the optimal timing of error prevention. The performance of an AEPA improves if it is executed later; there may not be sufficient time for error prevention, however, if an AEPA is executed too late. There is a tradeoff between AEPA performance and error prevention time, and the optimal timing can be determined by studying their relationship.

References

Abdelmoez, W., Nassar, D. M., Shereshevsky, M., Gradetsky, N., Gunnalan, R., Ammar, H. H., et al. (2004). Error propagation in software architectures. In Proceedings of the 10th international symposium on software metrics (pp. 384–393).
AutoMod, Version 11.1 (1998–2003). Brooks Automation, Inc.
Bolchini, C., Fornaciari, W., Salice, F., & Sciuto, D. (1998). Concurrent error detection at architectural level. In Proceedings of the 11th international symposium on system synthesis (pp. 72–75).
Bolchini, C., Pomante, L., Salice, F., & Sciuto, D. (2002). Reliability properties assessment at system level: A co-design framework. Journal of Electronic Testing, 18, 351–356.
Bruccoleri, M., & Pasek, Z. J. (2002). Operational issues in reconfigurable manufacturing systems: Exception handling. In Proceedings of the 5th biannual world automation congress.
Chen, X. W., & Nof, S. Y. (2007). Error detection and prediction algorithms: Application in robotics. Journal of Intelligent and Robotic Systems, 48, 225–252.
Chen, X. W., & Nof, S. Y. (2009). Automating errors and conflicts prognostics and prevention. In Springer handbook of automation (pp. 503–525). Heidelberg, Germany: Springer.
Chen, X. W., & Nof, S. Y. (2010). A decentralized conflict and error detection and prediction model. International Journal of Production Research, 48, 4829–4843.
Garrison, D. R., & Hickey, J. J. (1984). Wald sequential sampling for attribute inspection. Journal of Quality Technology, 16.
Huang, C. Y., & Nof, S. Y. (1999). Enterprise agility: A view from the PRISM lab. International Journal of Agile Management Systems, 4, 51–59.
Huang, C. Y., & Nof, S. Y. (2000). Formation of autonomous agent networks for manufacturing systems. International Journal of Production Research, 38, 607–624.
Huang, C. Y., Ceroni, J. A., & Nof, S. Y. (2000). Agility of networked enterprises: Parallelism, error recovery, and conflict resolution. Computers in Industry, 42, 275–287.
Jeng, M. D. (1997). Petri nets for modeling automated manufacturing systems with error recovery. IEEE Transactions on Robotics and Automation, 13, 752–760.
Kanawati, G. A., Nair, V. S. S., Krishnamurthy, N., & Abraham, J. A. (1996). Evaluation of integrated system-level checks for on-line error detection. In Proceedings of the IEEE international computer performance and dependability symposium (pp. 292–301).
Kao, J. F. (1995). Optimal recovery strategies for manufacturing systems. European Journal of Operational Research, 80, 252–263.
Kelly, A. (2004). Linearized error propagation in odometry. International Journal of Robotics Research, 23, 179–218.
Klein, B. D. (1997). How do actuaries use data containing errors?: Models of error detection and error correction. Information Resources Management Journal, 10, 27–36.
Lara, M. A., & Nof, S. Y. (2003). Computer-supported conflict resolution for collaborative facility designers. International Journal of Production Research, 41, 207–234.
Lopes, L. S., & Camarinha-Matos, L. M. (1995). A machine learning approach to error detection and recovery in assembly. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems 95, 'Human robot interaction and cooperative robots' (Vol. 3, pp. 197–203).
Miceli, T., Sahraoui, H. A., & Godin, R. (1999). A metric based technique for design flaws detection and correction. In Proceedings of the 14th IEEE international conference on automated software engineering (pp. 307–310).
Mir, Y. A., Mayer, J. R. R., & Fortin, C. (2002). Tool path error prediction of a five-axis machine tool with geometric errors. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 216, 697–712.
Mitra, S., & McCluskey, E. J. (2001). Diversity techniques for concurrent error detection. In Proceedings of the IEEE 2nd international symposium on quality electronic design (pp. 249–250).
Najjari, H., & Steiner, S. J. (1997). Integrated sensor-based control system for a flexible assembly. Mechatronics, 7, 231–262.
Nof, S. Y. (2003). Design of effective e-work: Review of models, tools, and emerging challenges. Production Planning & Control, 14, 681–703.
Nof, S. Y., & Chen, J. (2003). Assembly and disassembly: An overview and framework for cooperation requirement planning with conflict resolution. Journal of Intelligent & Robotic Systems, 37, 307–320.
Raghavan, V., Shakeri, M., & Pattipati, K. (1999). Test sequencing algorithms with unreliable tests. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 29, 347–357.
Ronsse, M., & Bosschere, K. (2002). Non-intrusive detection of synchronization errors using execution replay. Automated Software Engineering, 9, 95–121.
Roos, N., ten Teije, A., & Witteveen, C. (2003). A protocol for multi-agent diagnosis with spatially distributed knowledge. In Proceedings of the AAMAS (pp. 655–661).
Steininger, A., & Scherrer, C. (1997). On finding an optimal combination of error detection mechanisms based on results of fault injection experiments. In Digest of papers, twenty-seventh annual international symposium on fault-tolerant computing (FTCS-27) (pp. 238–247).
Svenson, O., & Salo, I. (2001). Latency and mode of error detection in a process industry. Reliability Engineering & System Safety, 73, 83–90.
Toguyeni, K. A., Craye, E., & Gentina, J. C. (1996). Framework to design a distributed diagnosis in FMS. In Proceedings of the IEEE international conference on systems, man and cybernetics (Vol. 4, pp. 2774–2779).
Tu, F., Pattipati, K. R., Deb, S., & Malepati, V. N. (2003). Computationally efficient algorithms for multiple fault diagnosis in large graph-based systems. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 33, 73–85.
Wald, A. (1947). Sequential analysis. New York, NY: John Wiley & Sons.
Yang, C. L., Chen, X., & Nof, S. Y. (2005). Design of a production conflict and error detection model with active protocols and agents. In Proceedings of the 18th international conference on production research.
Yi, S., Haralick, R. M., & Shapiro, L. G. (1994). Error propagation in machine vision. Machine Vision and Applications, 7, 93–114.

Agent-based error prevention algorithms
1 Introduction
2 Problem definition and related work
   2.1 Research assumption and case study
   2.2 Research definitions
      2.2.1 Agent
      2.2.2 Co-net
      2.2.3 Error
      2.2.4 Agent output of conformities
      2.2.5 Agent output of nonconformities
      2.2.6 Agent minimum input
      2.2.7 Agent conformability
      2.2.8 Identify; detect; predict; prevent
   2.3 Problem definition
   2.4 Related work
3 AEPAs
   3.1 Local error prevention
      3.1.1 Nominal rule R1
      3.1.2 Conservative rule R2
      3.1.3 Optimistic rule R3
   3.2 Propagation error prevention
      3.2.1 Nominal rule R4
      3.2.2 Conservative rule R5
      3.2.3 Optimistic rule R6
      3.2.4 Combination of R4, R5, and R6
   3.3 Integrated error prevention
      3.3.1 Nominal rule R7
      3.3.2 Conservative rule R8
      3.3.3 Optimistic rule R9
   3.4 Performance measures and propositions
4 Validation and discussion
   4.1 Design of experiments
   4.2 Experiments results and discussion
5 Conclusion and future research
References

