CZECH TECHNICAL UNIVERSITY IN PRAGUE
FACULTY OF ELECTRICAL ENGINEERING

Artificial Intelligence Based Intelligent Thermostat
(Inteligentní termostat využívající metod umělé inteligence)

MASTER'S THESIS
2018
MASTER'S THESIS ASSIGNMENT

I. Personal and study details
Student's name: Procházka Martin
Personal ID number:
Faculty / Institute: Faculty of Electrical Engineering
Department / Institute: Department of Control Engineering
Study program: Cybernetics and Robotics
Branch of study: Cybernetics and Robotics

II. Master's thesis details
Master's thesis title in English: Artificial intelligence based intelligent thermostat
Master's thesis title in Czech: Inteligentní termostat využívající metod umělé inteligence

Guidelines:
1. Study the reinforcement learning method used in Artificial Intelligence, research the model-less optimal control design methods.
2. Validate the control performance of intelligent, self-learning thermostats based on the Q-learning approach using a simulation model of a building with various properties (thermal inertia, insulation, etc.), and evaluate the applicability of the proposed methods for commercial thermostat applications.
3. Estimate the benefits resulting from improved energy efficiency of domestic heating due to improved controller tuning.

Bibliography / sources:
[1] R. S. Sutton and A. G. Barto, Reinforcement Learning, The MIT Press, 1998 (several editions).
[2] M. Wiering, M. van Otterlo (Eds.), Reinforcement Learning: State-of-the-Art, Springer.
[3] C. E. Rasmussen, C. K. I. Williams, Gaussian Processes for Machine Learning, The MIT Press, 2006.

Name and workplace of master's thesis supervisor: prof. Ing. Vladimír Havlena, CSc., Department of Control Engineering, FEE
Name and workplace of second master's thesis supervisor or consultant:
Date of master's thesis assignment:   Deadline for master's thesis submission:   Assignment valid until:
prof. Ing. Vladimír Havlena, CSc. (Supervisor's signature)   prof. Ing. Michael Šebek, DrSc. (Head of department's signature)   prof. Ing. Pavel Ripka, CSc. (Dean's signature)

III. Assignment receipt
The student acknowledges that the master's thesis is an individual work. The student must produce the thesis without the assistance of others, with the exception of the provided consultations. Within the master's thesis, the author must state the names of consultants and include a list of references.
Date of assignment receipt   Student's signature
Declaration

I hereby declare that the presented thesis was developed independently and that I have stated all information sources in accordance with the methodological instructions on the compliance of ethical principles in the preparation of university graduate theses.

In Prague on May 25,   Signature
Abstract

This thesis utilizes reinforcement learning, particularly Q-learning, to develop a self-learning algorithm for an intelligent thermostat. The algorithm is able to find an optimal controller solely based on the provided data. The data available for learning include the room temperature, the reference temperature, the outside temperature, and the applied input (the heat flow generated by the boiler). The research part of this work explains in depth the principle of Q-learning and demonstrates the learning process itself on several examples. An advantage of using Q-learning is its capability to incorporate a priori known information. In our case, it can be shown that the Q-function learned by the algorithm will be quadratic and positive definite, which substantially reduces the required amount of training data. The Q-learning approach is also able to deal with the constraints introduced by the boiler if merged with MPC. The final chapter includes several experiments verifying the three major developed techniques (controllers comparison, optimal controller generation, and merging Q-learning and MPC) on a realistic thermal model.

Keywords: control theory; optimal control; adaptive control; MPC; reinforcement learning; Q-learning
Abstrakt

This thesis uses reinforcement learning, specifically Q-learning, to create an algorithm for an intelligent thermostat. The algorithm is able to find an optimal controller solely on the basis of the provided data. The data available for learning include the room temperature, the reference temperature, the outside temperature, and the applied input (the heat flow generated by the boiler). The research part of this work explains the principle of Q-learning in detail and demonstrates the learning process itself on several examples. The advantage of using Q-learning is its ability to exploit a priori known information. In our case, it can be proven that the Q-function found by the algorithm will be quadratic and positive definite, which substantially reduces the required amount of training data. Q-learning is also able to cope with the constraints caused by the boiler if it is merged with model predictive control (MPC). The final chapter contains several experiments verifying the three main developed techniques (controllers comparison, optimal controller generation, and merging Q-learning and MPC) on a realistic model.

Keywords: control theory; optimal control; adaptive control; model predictive control (MPC); reinforcement learning; Q-learning
Acknowledgments

I would like to thank my supervisor prof. Ing. Vladimír Havlena, CSc. for his guidance, support, and constructive criticism during consultations. I would also like to thank all the members of staff at Honeywell Prague Laboratory and everyone who has supported me during my work on this thesis.
Contents

1 Introduction
2 Research
  2.1 Reinforcement Learning
    2.1.1 Markov Decision Processes
    2.1.2 Bellman Optimality Equation
    2.1.3 Dynamic Programming
    2.1.4 Policy Evaluation
  2.2 Q-Learning
    2.2.1 Q-Function Identification (Learning)
    2.2.2 Policy Iteration
    2.2.3 Value Iteration
    2.2.4 Problems with Policy and Value Iteration
    2.2.5 Relation with the Algebraic Riccati Equation
    2.2.6 Q-Learning for Stochastic Systems
  2.3 Model Predictive Control (MPC)
    2.3.1 MPC Without Constraints
    2.3.2 Constraints on the Input Vector
    2.3.3 Quadratic Programming
    2.3.4 Receding Horizon
3 Problem Description
  3.1 Controlled System
  3.2 Demands on the Control Quality
  Constraints Introduced by the Boiler
4 Thermal Model
  Description
  Capabilities and Settings
5 Solution
  Major Derived Techniques
    Controllers Comparison
    Multi-Component Reward
    Optimal Controller
    Merging Q-Learning and MPC
  Partial Problems Solutions
    Policy Choosing During Off-Policy Learning
    Q-Function Evaluation
  Tests
    Q-Learning Motivation
    Q-Learning Versus Lyapunov Equation
    Biased Versus Unbiased Estimator
    Constrained Control Motivation
    Additional Requirements on the Learning Policy
6 Results
  Controllers Comparison
  Optimal Controller
  Merging Q-Learning and MPC
7 Conclusion
  Unsolved Problems and Other Future Improvements
References
1 Introduction

Modeling dynamical systems from first principles for later controller design is long and tedious work. First, a set of differential and algebraic equations has to be created with respect to the physical laws describing the dynamical system in question. Those equations include many constants that need to be estimated from data measured on the real system. This is where the main complication arises. Because of transportation delays and long time constants of some systems, it is difficult to obtain a sufficient amount of data in a reasonable amount of time to accurately estimate the constants in the model. This problem, combined with many others, resulted in the search for an alternative approach.

The first significant breakthrough was achieved by using adaptive controllers, which could be described as dynamic systems with on-line parameter estimation. Adaptive control was first used in the 1950s to design autopilots for high-performance aircraft. Since then, the theory has been found useful in many other fields. Self-Tuning Controllers (STC), as a part of adaptive control theory, were found to be a suitable substitute for first-principles modeling of dynamic systems. STCs simultaneously identify parameters and design controllers accordingly. [3, 4]

Parameter estimation usually utilizes the black-box approach, where model parameters are estimated based on input and output data. The model structure can take several forms. Autoregressive models (ARX: AutoRegressive model with eXternal input, ARMAX: AutoRegressive Moving Average model with eXternal input, OE: Output Error model, ...) are among the simplest, yet often sufficient, model structures used in system identification. Those models have a simple transfer-function form and work under the assumption that the current output value depends on a finite number of previous outputs, inputs, and some stochastic term e(t); for instance, a first-order ARX model has the form y(t) = -a_1 y(t-1) + b_1 u(t-1) + e(t). [3]

The role of classical adaptive control was later taken over by artificial intelligence, especially by Neural Networks (NN). Various neural network architectures with differing transfer function types are used to learn the system dynamics and even to generate the appropriate control action. Applications of NNs in this manner can be seen, for example, in P. Henaff, M. Milgram, J. Rabit (1994), C. Chen, Z. Liu, K. Xie, Y. Zhang, C. L. P. Chen (2017) or H. Leeghim, D. Kim (2015). Although the neural network approach undoubtedly works, it has several properties that could be seen as problematic. One of those properties is its black-box nature, which does not allow us to see how the training and prediction itself works. Another problem of NNs is the large amount of data required for training. [4]

The Q-learning method used in this thesis does not suffer from those problems. The research part of this work explains in depth the principle of Q-learning and demonstrates the learning process itself on several examples. Q-learning is also capable of incorporating a priori known information. In our case, it can be shown that the Q-function learned by the algorithm is quadratic and positive definite, which substantially reduces the required amount of training data.

The goal of this thesis is to develop a self-learning algorithm for an intelligent thermostat. The algorithm should be able to find an optimal controller based on the provided data. The data available for learning include the room temperature, the reference temperature, the outside temperature, and the applied input (the heat flow generated by the boiler).
The first chapter explores and describes the already known theory of optimal control and Q-learning that was found useful for later use. It mostly includes a description of Q-learning, machine learning in general, and Model Predictive Control (MPC). Then, the problem of developing the self-learning thermostat, with all its limitations and constraints, is introduced. This chapter also contains a short general description of the controlled system. The next section introduces the realistic thermal model, which is later used to evaluate and test all major results
of this thesis. The following chapter is related to the first one, as it shows how the already known theoretical results were modified, combined, and applied to the problem to derive the three main techniques of this thesis: controllers comparison, optimal controller generation, and merging Q-learning and MPC. This section also includes solutions to a series of partial problems that had to be addressed. The chapter ends with several tests proving the concept, justifying the use of Q-learning, showing its advantages, and exposing a few complications. The final chapter includes several experiments verifying the three major developed techniques on the thermal model. The evaluation of the achieved results is summarized in the conclusion.
2 Research

This chapter describes the ideas, techniques, and methods used later in the text as a solution to the given problems. Those solutions are heavily based on Q-learning and reinforcement learning in general; therefore, those techniques will be described in more detail. The chapter ends with a description of Model Predictive Control (MPC).

2.1 Reinforcement Learning

Reinforcement learning is a type of machine learning inspired by learning in nature. Animals are given rewards or punishments by the environment and modify their behavioral strategies accordingly. From the control engineering point of view, it can be classified as both optimal and adaptive control, since given a sufficient amount of exploration data, reinforcement learning methods converge to an optimal control law without knowing the system dynamics beforehand. [2]

2.1.1 Markov Decision Processes

Markov decision processes (MDP) are used when a decision-making situation has to be modeled so that in a state x the decision maker has several inputs^1 (actions) u to choose from, and after applying one of them the system randomly moves to a state x'. The Markov decision process is defined as a tuple (X, U, P, R), where X is the set of all states and U is the set of all inputs. P, written P^u_{x,x'}, gives the transition probability of moving from x ∈ X to x' ∈ X for every state x ∈ X and input u ∈ U. R, written R^u_{x,x'}, is the reward gained after transitioning from x ∈ X to x' ∈ X using the input u ∈ U. If the sets X and U are finite, the process is called a finite Markov decision process. Most of the reinforcement learning theory was developed for the finite Markov decision process framework. Therefore, using this theory for continuous systems with an infinite state and input space may (and will) cause some problems, which will be described as they arise later in the text. [1, 2]

2.1.2 Bellman Optimality Equation

To explain reinforcement learning in the most understandable way, it will first be explained on a simple example, and generalizations will come later. Suppose first that we have a deterministic discrete time-invariant model, which can be described by a set of linear difference equations written in the simple form

x_{t+1} = f(x_t, u_t),    (1)

where x is the system state and u is the input to the system. The model can be seen as a special case of the MDP, where the transition probabilities are replaced by the transition law (1).

^1 The terminology and labeling used for the values to be applied to a system differ from field to field. Control engineers use inputs u, whereas actions a are used in machine learning and artificial intelligence in general. Since this thesis utilizes machine learning methods for control purposes, the input u notation will be used from now on.
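To make the objects of the tuple (X, U, P, R) concrete, the following sketch represents a tiny finite MDP with plain Python dictionaries. The two states, two inputs, and all probabilities and rewards below are invented purely for illustration and are not taken from the thesis.

```python
# Hedged sketch: a tiny finite MDP (X, U, P, R) with made-up values.
X = ["cold", "warm"]
U = ["heat_off", "heat_on"]

# P[x][u] maps a next state x' to the transition probability P^u_{x,x'}
P = {
    "cold": {"heat_off": {"cold": 1.0},
             "heat_on":  {"cold": 0.2, "warm": 0.8}},
    "warm": {"heat_off": {"cold": 0.7, "warm": 0.3},
             "heat_on":  {"warm": 1.0}},
}

# R[x][u] maps a next state x' to the cost R^u_{x,x'} paid for that transition
R = {
    "cold": {"heat_off": {"cold": 1.0},
             "heat_on":  {"cold": 1.5, "warm": 0.5}},
    "warm": {"heat_off": {"cold": 1.0, "warm": 0.0},
             "heat_on":  {"warm": 0.5}},
}

def expected_cost(x, u):
    """Expected one-step cost of applying input u in state x."""
    return sum(p * R[x][u][x_next] for x_next, p in P[x][u].items())

print(expected_cost("cold", "heat_on"))
```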
Now a cost^2 r(x_t, u_t) can be defined as a function of x and u at time t, which is the expected immediate cost paid after a transition from x_t to x_{t+1} using the input u_t. How exactly r depends on x_t and u_t will be defined later.

Next, a deterministic policy has to be defined. Suppose X is the set of all states and U is the set of all inputs. A policy u = π(x) then defines which u from U should be applied when the system is in state x, for every x ∈ X. The value of a policy π as a function of x_t is defined as the sum of all future costs when starting in state x_t and using the input u given by the policy π from now on,

V^\pi(x_t) = \sum_{k=0}^{\infty} \gamma^k r(x_{t+k}, u^\pi_{t+k}).    (2)

Notice that the future costs are reduced by the discount factor γ (0 ≤ γ < 1). The optimal policy π* is defined as

\pi^*(x_t) = \arg\min_{\pi} V^\pi(x_t)    (3)

and its corresponding optimal value is defined as

V^*(x_t) = \min_{\pi} V^\pi(x_t).    (4)

Since the value of a policy π is defined as a sum, equation (2) can also be written as

V^\pi(x_t) = r(x_t, u^\pi_t) + \gamma \sum_{k=1}^{\infty} \gamma^{k-1} r(x_{t+k}, u^\pi_{t+k}),    (5)

V^\pi(x_t) = r(x_t, u^\pi_t) + \gamma V^\pi(x_{t+1}).    (6)

Equation (6) is called the Bellman equation. Using (3), (4) and (6), the Bellman optimality equations can be derived:

V^*(x_t) = \min_{u_t} \left[ r(x_t, u_t) + \gamma V^*(x_{t+1}) \right],    (7)

\pi^*(x_t) = \arg\min_{u_t} \left[ r(x_t, u_t) + \gamma V^*(x_{t+1}) \right].    (8)

If the finite MDP framework is considered, the function V can be represented as an n-dimensional vector (since the state and input spaces are finite and discrete), where n is the dimension of the vector x_t; n can also be called the order of the system. On the other hand, if the state and input spaces were infinite and continuous, the V-function would be represented by an n-dimensional continuous function. This generalization makes the identification of such a function considerably more complex, and some reinforcement learning methods cannot be used at all.

^2 Typical problems solved by artificial intelligence are defined so that the goal is to maximize a reward (labeled r). On the contrary, a typical control engineering problem is designed so that the goal is to minimize a certain cost. Since those concepts are to some extent similar, the labeling from artificial intelligence persists even in this context.
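The value of a policy (2) and the Bellman equation (6) can be checked numerically on a small example. The Python sketch below evaluates V^π for an invented scalar linear system under a fixed linear policy by summing discounted quadratic costs over a long horizon; the system, the policy, and the cost are assumptions made only for illustration (the thesis itself works with the building model introduced later).

```python
# Hedged sketch (not from the thesis): evaluating V^pi for a deterministic
# scalar system x_{t+1} = a*x_t + b*u_t under a fixed policy u = -k*x,
# with quadratic stage cost r(x, u) = x^2 + u^2 and discount factor gamma.
a, b, k, gamma = 0.9, 0.5, 0.4, 0.95

def value_of_policy(x0, horizon=2000):
    """Approximate V^pi(x0) = sum_k gamma^k r(x_k, u_k) by a long finite sum."""
    v, x = 0.0, x0
    for t in range(horizon):
        u = -k * x                      # policy pi
        v += gamma**t * (x**2 + u**2)   # discounted stage cost
        x = a * x + b * u               # transition law x_{t+1} = f(x_t, u_t)
    return v

x0 = 1.0
v0 = value_of_policy(x0)
# Bellman consistency check: V(x0) should be close to r(x0, u0) + gamma * V(x1)
u0 = -k * x0
x1 = a * x0 + b * u0
print(v0, (x0**2 + u0**2) + gamma * value_of_policy(x1))
```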
2.1.3 Dynamic Programming

Dynamic programming is closely related to the Bellman equation and the Bellman optimality equation. It is a backward-in-time, offline method for finding an optimal policy for a given problem. This approach requires knowledge of the system dynamics and of the expected costs. Since the method runs offline backward in time, some finite horizon h has to be set. Therefore, the optimization will be performed on a finite set of time steps between times t and t + h. Since the time t + h is the last one for the optimization, a reward r_h(x_{t+h}) depending only on the state x_{t+h} at time t + h has to be defined. As stated before, dynamic programming runs backward in time and uses the Bellman equation; therefore, the value has to be computed at time t + h first. Since there is no time t + h + 1, equation (7) for time t + h can be written as

V^*(x_{t+h}) = r_h(x_{t+h}).    (9)

The value is computed as some function of x_{t+h} for every x ∈ X. Then the algorithm runs backward in time, computing the optimal input u* and the value V^*(x_{t+k}) for each state x ∈ X according to the Bellman optimality equation (10) and equation (11) at every time step k between times t and t + h:

V^*(x_k) = \min_{u_k} \left[ r(x_k, u_k) + \gamma V^*(x_{k+1}) \right],    (10)

u^* = \arg\min_{u_k} \left[ r(x_k, u_k) + \gamma V^*(x_{k+1}) \right].    (11)

An example of a simple problem solved by dynamic programming is shown in Figure 1.

2.1.4 Policy Evaluation

Considering the MDP framework, a way to compute the value function V^π for a given policy π has to be found. This process is called policy evaluation, and it utilizes the dynamic programming approach. The Bellman equation (6) can be understood so that its left side is at iteration k + 1 and its right side is at iteration k,

V^\pi_{k+1}(x_t) = r(x_t, u^\pi_t) + \gamma V^\pi_k(x_{t+1}).    (12)

Indices k and k + 1 are related to iterations of the policy evaluation; indices t and t + 1 relate to time. The initial approximation of V^π can be chosen arbitrarily. If it is then updated using equation (12), it can be shown that as k → ∞ the sequence of V^π approximations converges to the true V^π, if the steady-state V^π exists. This iterative process is usually stopped once the difference between V^π_k and V^π_{k+1} is lower than some threshold. The update from V^π_k to V^π_{k+1} can be performed in at least two ways: either V^π_k is kept as it is and is used to separately compute V^π_{k+1} for all x ∈ X, or V^π_k and V^π_{k+1} merge, as V^π_k is updated for all x ∈ X and the old value is immediately rewritten by the new one. Both of these approaches converge to V^π as k → ∞. [1]

This value function computation approach can be used without the knowledge of transition probabilities between states (system dynamics) only if, as so far considered, the system is deterministic. However, it cannot be used without known system dynamics if the system has a stochastic part (see 2.2.6). Therefore, some other way to obtain the value function has to be found.
Figure 1: Dynamic programming example. (a) The reward r_h(x_{t+h}) is computed for every x ∈ X. (b) The reward for all u ∈ U is computed at each state x ∈ X. (c) u* (from all u ∈ U) and the value V^*(x_{t+k}) are found for each state x ∈ X. (d) Once the value is computed for all x ∈ X at every time step k between times t and t + h, the optimal policy for each x ∈ X at time t (red) can be shown.

2.2 Q-Learning

Q-learning is a type of reinforcement learning. It is a data-driven approach, where the system dynamics are unknown. The Bellman equation (6) can be modified and used in Q-learning. First, a new function has to be defined using the Bellman equation (6),

Q^\pi(x_t, u_t) = r(x_t, u_t) + \gamma V^\pi(x_{t+1}).    (13)

This new Q-function (also called the action-value function) has a similar meaning as the V-function defined by (2), but it is a function of x_t and u_t, as opposed to being a function of x_t only in the case of the V-function. The Q-function tells us how the value of the strategy π would change if the input u_t were applied at time t and the policy π were then used from time t + 1 onward. Since

V^\pi(x_t) = Q^\pi(x_t, u^\pi_t),    (14)

equation (13) can be written as the Bellman equation

Q^\pi(x_t, u_t) = r(x_t, u_t) + \gamma Q^\pi(x_{t+1}, u^\pi_{t+1}).    (15)
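Continuing the same invented scalar example used above, the sketch below computes the Q-function of the fixed policy via (13) and then performs a one-step greedy minimization over a grid of candidate inputs, which anticipates the policy improvement step of section 2.2.2. All numbers are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: Q^pi(x, u) = r(x, u) + gamma * V^pi(f(x, u)) for the made-up
# scalar system, and a greedy input found by minimizing Q over a grid.
import numpy as np

a, b, k, gamma = 0.9, 0.5, 0.4, 0.95

def value_of_policy(x0, horizon=2000):
    v, x = 0.0, x0
    for t in range(horizon):
        u = -k * x
        v += gamma**t * (x**2 + u**2)
        x = a * x + b * u
    return v

def q_of_policy(x, u):
    """Pay r(x, u) now, then follow the policy pi from the next state onward."""
    x_next = a * x + b * u
    return (x**2 + u**2) + gamma * value_of_policy(x_next)

x = 1.0
candidates = np.linspace(-2.0, 2.0, 401)
q_vals = [q_of_policy(x, u) for u in candidates]
u_greedy = candidates[int(np.argmin(q_vals))]
print("policy input:", -k * x, "greedy (improved) input:", u_greedy)
```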
The Bellman optimality equation, as a Q-function equivalent of (7), is

Q^*(x_t, u_t) = r(x_t, u_t) + \gamma \min_{u_{t+1}} Q^*(x_{t+1}, u_{t+1}).    (16)

2.2.1 Q-Function Identification (Learning)

Considering a Q-learning problem, the only information given by the system after applying the input u_t is the state x_t, the reward r(x_t, u_t), and the state x_{t+1}. Therefore, the Q-function has to be identified using only the values { u_t, x_t, x_{t+1}, r(x_t, u_t) }. Equation (15) can be rewritten to fit a temporal-difference form,

r(x_t, u_t) = Q^\pi(x_t, u_t) - \gamma Q^\pi(x_{t+1}, u^\pi_{t+1}).    (17)

This equation only holds for a deterministic system and an already identified Q-function. Generally, there is a temporal-difference error e_t defined as

e_t = r(x_t, u_t) - Q^\pi(x_t, u_t) + \gamma Q^\pi(x_{t+1}, u^\pi_{t+1}).    (18)

During the learning period, such an e_t is obtained at each time step t. The goal now is to find a Q such that e_t at each time t is minimized, which can be achieved for instance by the Recursive Least Squares (RLS) method.

On-Policy Learning

If equation (17) uses u_t chosen by the policy π, as in (19), then the Q-function identification method is described as on-policy learning. In this case, the state x_{t+1} is a result of π:

r(x_t, u^\pi_t) = Q^\pi(x_t, u^\pi_t) - \gamma Q^\pi(x_{t+1}, u^\pi_{t+1}).    (19)

Off-Policy Learning

On the other hand, the Q-function of a policy π_1 can be evaluated while inputs given by some other policy π_0 are applied. This method is called off-policy learning and is described by equation (20). In this case, the state x_{t+1} is a result of using the policy π_0 even though the policy π_1 is being evaluated:

r(x_t, u^{\pi_0}_t) = Q^{\pi_1}(x_t, u^{\pi_0}_t) - \gamma Q^{\pi_1}(x_{t+1}, u^{\pi_1}_{t+1}).    (20)

2.2.2 Policy Iteration

Policy iteration and value iteration are dynamic-programming-based methods used to obtain the optimal value function V* or the optimal Q-function Q* and their corresponding optimal policy π*. As the names of those methods suggest, the process of finding such an optimal function and policy is iterative. Let us start with the policy iteration method.

Suppose we have some policy π_0. Its corresponding value function V^{π_0} (or Q-function Q^{π_0}, since V^{π_0}(x_t) = Q^{π_0}(x_t, u^{π_0}_t)) can be computed either by the policy evaluation method (2.1.4), which
can only be performed on-policy, or by on/off-policy learning (2.2.1). The policy improvement theorem then states that

\left( Q^{\pi_0}(x_t, u^{\pi_1}_t) \leq Q^{\pi_0}(x_t, u^{\pi_0}_t) \right) \Rightarrow \left( Q^{\pi_1}(x_t, u^{\pi_1}_t) \leq Q^{\pi_0}(x_t, u^{\pi_0}_t) \right) \quad \forall x_t \in X.    (21)

In other words, if the Q-function corresponding to the policy π_0 has a lower or equal value for all states x_t ∈ X after the input given by the policy π_1 is applied than if the input given by the policy π_0 is applied, then the policy π_1 is as good as, or better than, the policy π_0. The proof of this theorem is simple, and it is given in [1]. This shows that, given some policies π_0 and π_1, it can easily be evaluated whether the policy π_1 lowers the value at some state x_t. Now, a way has to be found to construct a policy which lowers (or at least does not increase) the value at all states. This is achieved by

\pi_1(x_t) = \arg\min_{u_t} Q^{\pi_0}(x_t, u_t).    (22)

Equation (22) has to be computed for all states x ∈ X, and each computation is done over all u ∈ U. This way a new policy π_1 is obtained. Simply put, this new policy does what is best at a one-step look-ahead. As proved by the policy improvement theorem, the newly obtained policy π_1 is as good as, or better than, the original policy π_0. This process is called policy improvement.

Once a policy improvement has been applied to obtain a policy π_1, the value of such a policy can be evaluated and then improved again to get an even better policy π_2. This method, where policy improvement alternates with policy evaluation, is called policy iteration. Since each policy π_{i+1} is better than (or as good as) π_i, a point where the optimal π* is found inevitably has to be reached. This occurs when V^{π_{i+1}} = V^{π_i} = V*. This convergence is conditioned by the finite input and state spaces of the MDP framework, which result in a finite number of policies. Using infinite continuous input and state spaces, resulting in an infinite number of policies, no such convergence can be guaranteed in a finite number of steps. This topic will be discussed further in section 2.2.4.

The policy iteration theory will be supported by a simple example. The convergence will be tested on a simple, stable discrete second-order system with a known optimal (with respect to some criterion) policy in the form of a state feedback controller u = π(x) = -Kx. Policy iteration will be used to find the same optimal policy. The gain K consists of five individual values, corresponding to the augmented state vector x_aug = [ 1  x_t  s_t  u_{t-1} ], where x_t is the state of the second-order system (therefore, two values), s_t is the set-point at time t, and u_{t-1} is the input at time t - 1. The iteration starts from a zero gain K_0, which is stabilizing since the system itself is stable. The algorithm is stopped if

\| K_i - K_{i-1} \| \leq \varepsilon,    (23)

where i denotes the iteration step and ε is a small threshold. Figure 2 shows how the values of the optimized policy converged to those of the known optimal gain. The stopping condition (23) was met at the seventh iteration. Since the policy was evaluated using the least squares method, and the policy evaluation method described in 2.1.4 was not used, no inner iterations are present.
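For readers who want to reproduce the flavour of this example, the following model-based sketch alternates policy evaluation and policy improvement for an invented discrete-time LQ problem, using the Lyapunov equation (42) and the gain update (51) derived later in section 2.2.5. The thesis itself evaluates policies from measured data by least squares rather than from a known model, so this is only an illustration under assumed system matrices.

```python
# Hedged sketch: model-based policy iteration (Kleinman-style) for a made-up
# discrete-time LQ problem. Policy evaluation solves the Lyapunov equation,
# policy improvement updates the gain; the result is checked against the ARE.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

A = np.array([[0.99, 0.1], [0.0, 0.9]])   # assumed, strictly stable system
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))                       # zero initial gain, stabilizing here
for i in range(50):
    A_cl = A - B @ K
    # Policy evaluation: (A - B K)^T P (A - B K) - P = -(Q + K^T R K)
    P = solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)
    # Policy improvement: K = (R + B^T P B)^{-1} B^T P A
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    if np.max(np.abs(K_new - K)) < 1e-10:  # stopping condition like (23)
        break
    K = K_new

# Cross-check against the algebraic Riccati equation solution
P_are = solve_discrete_are(A, B, Q, R)
K_are = np.linalg.solve(R + B.T @ P_are @ B, B.T @ P_are @ A)
print(K)
print(K_are)
```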
Figure 2: Policy Iteration Example (optimized gain values versus the known optimal gain values over the iterations).

2.2.3 Value Iteration

The value iteration algorithm is similar to policy iteration. The policy evaluation step (2.1.4) of the policy iteration algorithm is actually an iterative process itself; the value of a policy V^π is, in theory, found in an infinite number of steps. The idea behind value iteration is to stop this policy evaluation after only one step and combine it with the policy improvement. The idea comes from the Bellman optimality equation (7), except that it is seen as an update of some value V, similarly as in policy evaluation (2.1.4):

V_{k+1}(x_t) = \min_{u_t} \left[ r(x_t, u_t) + \gamma V_k(x_{t+1}) \right].    (24)

Equation (24) has to be computed, yet again, for all states x ∈ X, and each computation is done over all u ∈ U. Considering the LQ problem, value iteration turns into the Riccati difference equation; this will be shown in section 2.2.5.

As with policy iteration, the value iteration theory, too, will be supported by a simple example. The setup stays the same, except that this time the policy u = π(x) = -Kx will be found using value iteration. Figure 3, again, shows the convergence of the value iteration algorithm. Using value iteration, the stopping condition (23) was met at the 62nd iteration.
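A matching sketch of value iteration for the same invented LQ problem is given below. Here the update is the Riccati difference recursion mentioned above: no stabilizing initial policy is needed, but convergence is noticeably slower than with the policy iteration sketch, which mirrors the behaviour reported in this section.

```python
# Hedged sketch: value iteration for the made-up LQ problem, i.e. the Riccati
# difference update P_{k+1} = Q + A^T P_k A - A^T P_k B (R + B^T P_k B)^{-1} B^T P_k A.
import numpy as np

A = np.array([[0.99, 0.1], [0.0, 0.9]])   # same assumed system as before
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.zeros((2, 2))                       # arbitrary initial value function
K = np.zeros((1, 2))
for k in range(10000):
    K_new = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # greedy gain for current P
    P = Q + A.T @ P @ A - A.T @ P @ B @ K_new                # one-step value update
    if np.max(np.abs(K_new - K)) < 1e-10:
        print("converged after", k + 1, "iterations")
        break
    K = K_new
print(K)
```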
Figure 3: Value Iteration Example (optimized gain values versus the known optimal gain values over the iterations).

2.2.4 Problems with Policy and Value Iteration

The first problem arises when policy iteration is applied to the infinite continuous state and input space framework. The value function cannot simply be evaluated by the policy evaluation method (2.1.4); it has to be evaluated in the form of a Q-function by on/off-policy learning (2.2.1). As said before, the Bellman equation can be viewed either as an update rule (25) or as an equation (26), or a set of equations, which can be solved to obtain some value V^π:

V^\pi_{k+1}(x_t) = r(x_t, u_t) + \gamma V^\pi_k(x_{t+1}),    (25)

V^\pi(x_t) = r(x_t, u_t) + \gamma V^\pi(x_{t+1}).    (26)

The Bellman equation (25) can also be seen as a functional equation, where V^π_k(x_{t+1}) is mapped into V^π_{k+1}(x_t). Then, for 0 ≤ γ < 1 and V^π finite, it can be shown that such a mapping is a so-called contraction mapping with a unique fixed point [11]. This means that given some two initial guesses of V^π, let us denote them V^π_{0a} and V^π_{0b}, their values get closer and closer (contraction) as the iteration converges. The fact that the contraction mapping has a unique fixed point means that if the iteration converges, it always converges to the same final V^π = V^π_{ka} = V^π_{kb}, no matter what the initial V^π_0 was. The same V^π can be obtained by solving the equation or set of equations given by (26).

Considering the MDP framework, where the state and input spaces are finite and the rewards are also finite, this holds for any initial policy π_0 as long as 0 ≤ γ < 1. If the state and
input spaces are not finite and the initial policy is destabilizing, the iterative way of obtaining the value function will not generally converge. It may diverge to infinity even if the discount factor γ is bounded as before. Using the second approach (26), some value function will be identified, but it will be wrong and it will not have any physical meaning. Therefore, policy iteration requires the initial policy to be stabilizing to ensure convergence to the optimal value function. On the other hand, value iteration does not require the initial policy to be stabilizing; however, its convergence is significantly slower compared to policy iteration, as demonstrated by the tests in 2.2.2 and 2.2.3.

An example supporting the need for a stabilizing initial policy for policy iteration, and conversely showing that it is not needed when using value iteration, will be given here. The setup remains identical to the one used in 2.2.2 and 2.2.3, with one exception: the system is now unstable, which in turn makes the zero initial policy K_0 not stabilizing. As expected, policy iteration did converge to some policy (Figure 4), but it was not the optimal one and maybe not even a stabilizing one. Value iteration did find the optimal policy; however, it took nearly 3000 iterations to converge (Figure 5). Usually, this trade-off between convergence speed and the need for a stabilizing initial policy creates a dilemma: which approach should be used? Fortunately, in our case, the system itself is stable: the room temperature tends to decrease toward the ambient temperature. Therefore, the zero initial policy K_0 is undoubtedly stabilizing, and the faster approach, policy iteration, can be used here without any concerns.

Figure 4: Policy Iteration Example with Not Stabilizing Initial Policy (optimized gain values versus the known optimal gain values over the iterations).
Figure 5: Value Iteration Example with Not Stabilizing Initial Policy (optimized gain values versus the known optimal gain values over the iterations).

2.2.5 Relation with the Algebraic Riccati Equation

Let us assume that the problem to be solved by the Q-learning methods is given in an LQ (linear-quadratic) form. Therefore, the system description is linear and given by

x_{t+1} = f(x_t, u_t) = A x_t + B u_t,    (27)

and the quadratic criterion is given by

J_t = \frac{1}{2} \sum_{i=t}^{\infty} \left( x_i^T Q x_i + u_i^T R u_i + 2 x_i^T N u_i \right),    (28)

where Q, R and N are weight matrices. For our purposes, the last term with the weight matrix N can be omitted. The reward will also take a quadratic form,

r(x_t, u_t) = \frac{1}{2} \left( x_t^T Q x_t + u_t^T R u_t \right).    (29)

Considering this new form of the reward, the value function corresponding to a policy π given by u_t = π(x_t) changes for γ = 1 from (2) to

V^\pi(x_t) = \frac{1}{2} \sum_{i=t}^{\infty} \left( x_i^T Q x_i + u_i^T R u_i \right).    (30)
Analogically to (5) and (6), we then get

V^\pi(x_t) = \frac{1}{2} \left( x_t^T Q x_t + u_t^T R u_t \right) + \frac{1}{2} \sum_{i=t+1}^{\infty} \left( x_i^T Q x_i + u_i^T R u_i \right),    (31)

V^\pi(x_t) = \frac{1}{2} \left( x_t^T Q x_t + u_t^T R u_t \right) + V^\pi(x_{t+1}).    (32)

The result of an LQR problem is a controller in the form

u_t = f(x_t) = -K x_t.    (33)

Since u_t = -K x_t, then according to (30), V^π(x_t) is an infinite sum of quadratic functions depending on x_t; therefore, the value will also have a quadratic form depending only on x_t, with a symmetric kernel matrix P, such that

V^\pi(x_t) = \frac{1}{2} x_t^T P x_t,    (34)

which changes (32) to

\frac{1}{2} x_t^T P x_t = \frac{1}{2} \left( x_t^T Q x_t + u_t^T R u_t \right) + \frac{1}{2} x_{t+1}^T P x_{t+1}.    (35)

By substituting (27) and (33) into (35), we get

\frac{1}{2} x_t^T P x_t = \frac{1}{2} \left( x_t^T Q x_t + u_t^T R u_t \right) + \frac{1}{2} (A x_t + B u_t)^T P (A x_t + B u_t),    (36)

x_t^T P x_t = x_t^T Q x_t + (-K x_t)^T R (-K x_t) + (A x_t + B(-K x_t))^T P (A x_t + B(-K x_t)),    (37)

x_t^T P x_t = x_t^T Q x_t + (K x_t)^T R K x_t + (A x_t - B K x_t)^T P (A x_t - B K x_t),    (38)

x_t^T P x_t = x_t^T Q x_t + x_t^T K^T R K x_t + \left( x_t^T A^T - x_t^T K^T B^T \right) P (A x_t - B K x_t),    (39)

x_t^T P x_t = x_t^T \left( Q + K^T R K \right) x_t + x_t^T \left( A^T - K^T B^T \right) P (A - B K) x_t,    (40)

P = Q + K^T R K + \left( A^T - K^T B^T \right) P (A - B K),    (41)

(A - B K)^T P (A - B K) - P = -\left( Q + K^T R K \right),    (42)

where (42) is a Lyapunov equation. Therefore, for a given K (or π), the Lyapunov equation is an LQR equivalent of the Bellman equation (6). By taking (36) and subtracting \frac{1}{2} x_t^T P x_t from the left side, we get
0 = x_t^T Q x_t + u_t^T R u_t + (A x_t + B u_t)^T P (A x_t + B u_t) - x_t^T P x_t,    (43)

which only holds for u_t = -K x_t, with one concrete K. Generally, (43) has a nonzero left side,

H(x_t, u_t) = x_t^T Q x_t + u_t^T R u_t + (A x_t + B u_t)^T P (A x_t + B u_t) - x_t^T P x_t,    (44)

where H(x_t, u_t) is a Hamiltonian function, which is an LQR equivalent of the temporal-difference error e_t used in (18). [2] The similarity between (17), (18) and (43), (44) is caused by the fact that (43) is the Bellman equation (17) for the LQR version of this problem. [2]

To find the gain K, the necessary optimality condition ∂H(x_t, u_t)/∂u_t = 0 has to be fulfilled. The procedure is shown below, starting by modifying (44):

H(x_t, u_t) = x_t^T Q x_t + u_t^T R u_t + \left( x_t^T A^T + u_t^T B^T \right) P (A x_t + B u_t) - x_t^T P x_t,    (45)

H(x_t, u_t) = x_t^T Q x_t + u_t^T R u_t + x_t^T A^T P A x_t + x_t^T A^T P B u_t + u_t^T B^T P A x_t + u_t^T B^T P B u_t - x_t^T P x_t,    (46)

\frac{\partial H(x_t, u_t)}{\partial u_t} = 2 R u_t + 2 B^T P A x_t + 2 B^T P B u_t.    (47)

Then, by setting ∂H(x_t, u_t)/∂u_t = 0, we get

0 = 2 R u_t + 2 B^T P A x_t + 2 B^T P B u_t,    (48)

0 = \left( R + B^T P B \right) u_t + B^T P A x_t,    (49)

u_t = -\left( R + B^T P B \right)^{-1} B^T P A x_t = -K x_t.    (50)

Therefore, the optimal strategy π with the gain

K = \left( R + B^T P B \right)^{-1} B^T P A    (51)

was found. By evaluating K using the Lyapunov equation (42), we are performing policy evaluation. If the P obtained by such an evaluation is used to generate a new K (using (51)), policy improvement is being executed. Therefore, by alternating between (42) and (51), we are performing the policy iteration algorithm (2.2.2), which is known in control theory as the Kleinman algorithm [17].

By modifying the Lyapunov equation (42) and substituting the gain (51) into it, we get

0 = A^T P A - A^T P B K - K^T B^T P A + K^T B^T P B K - P + Q + K^T R K,    (52)
0 = A^T P A - P + Q - A^T P B K + K^T \left( B^T P B K + R K - B^T P A \right),    (53)

0 = A^T P A - P + Q - A^T P B K + K^T \left( \left( B^T P B + R \right) K - B^T P A \right),    (54)

0 = A^T P A - P + Q - A^T P B \left( R + B^T P B \right)^{-1} B^T P A + K^T \left( \left( B^T P B + R \right) \left( R + B^T P B \right)^{-1} B^T P A - B^T P A \right),    (55)

0 = A^T P A - P + Q - A^T P B \left( R + B^T P B \right)^{-1} B^T P A,    (56)

where (56) is the Algebraic Riccati Equation, which is an LQR equivalent of the Bellman optimality equation (7). This result, combined with (34), shows that the value of a policy given by u_t = π(x_t) = -K x_t can be obtained by solving the Algebraic Riccati equation, given that the system dynamics are known. In other words, if the system dynamics are known, there are two ways to obtain the value of a policy: one is to use the Algebraic Riccati or Lyapunov equation, and the second one is the data-driven identification described in 2.2.1. The fact that there is more than one way to obtain the value of a policy can be used to verify that the machine learning approach gives the same results as the already known and rigorous approach in the form of the Algebraic Riccati or Lyapunov equation. Such a comparison will be given in the section Q-Learning Versus Lyapunov Equation.

2.2.6 Q-Learning for Stochastic Systems

For simplicity, stochastic systems have not been considered until now. Although Q-learning, or reinforcement learning methods in general, are perfectly usable for deterministic systems, they are intended to be used on stochastic systems. There are several reasons supporting this, one of them being that there already are simpler methods for deterministic systems which achieve the same goal as reinforcement learning. Simply put, using reinforcement learning on deterministic systems is a bit of an overkill.

The main characteristic of stochastic systems is that, even though the current state of the system or Markov decision process and the applied action are known, the next state cannot be predicted with 100 % certainty. In Markov decision processes this means that there are some transition probabilities of transitioning from the current state x_t to all states x_{t+1} ∈ X, given the applied input at time t, as described in section 2.1.1. If we were to use the discrete time-invariant model (27), a stochastic part e_t would have to be added:

x_{t+1} = A x_t + B u_t + e_t.    (57)

The Bellman equation (6) and the Bellman optimality equation (7) change their forms to (58) and (59), respectively. Notice the newly added expected value operator:

V^\pi(x_t) = r(x_t, u^\pi_t) + \gamma \, \mathrm{E}\{ V^\pi(x_{t+1}) \mid x_t, u_t \},    (58)
V^*(x_t) = \min_{u_t} \left[ r(x_t, u_t) + \gamma \, \mathrm{E}\{ V^*(x_{t+1}) \mid x_t, u_t \} \right].    (59)

Using the Q-function, their respective forms are

Q^\pi(x_t, u_t) = r(x_t, u_t) + \gamma \, \mathrm{E}\{ Q^\pi(x_{t+1}, u^\pi_{t+1}) \mid x_t, u_t \},    (60)

Q^*(x_t, u_t) = r(x_t, u_t) + \gamma \, \mathrm{E}\{ \min_{u_{t+1}} Q^*(x_{t+1}, u_{t+1}) \mid x_t, u_t \}.    (61)

The transition from deterministic to stochastic systems has a significant influence on the Q-function identification (2.2.1). If the system is deterministic, only a finite number of equations (18) (time steps) has to be obtained to find the Q-function which minimizes e_t in each equation. On the other hand, if the system is stochastic, one would need an infinite number of time steps to truly identify the Q-function.

2.3 Model Predictive Control (MPC)

The main reason for the use of Model Predictive Control in this thesis is its ability to deal with hard constraints, which are generally caused by physical limitations of actuators. In our case, the need for constraint support arises mostly from the fact that while controlling the temperature in a room, a negative action input (cooling) is often not possible and certainly not wanted, as such cooling would be seen as a waste of energy. This chapter includes information acquired from [5].

The MPC works on a finite horizon of length N, from time 0 to time N. Its goal is to minimize a given criterion J(x, u) over the set of inputs and states,

J^*(x_0) = \min_{u_0, \ldots, u_{N-1},\, x_0, \ldots, x_N} J(x, u) = \min_{u_0, \ldots, u_{N-1},\, x_0, \ldots, x_N} \left[ \phi(x_N) + \sum_{k=0}^{N-1} L_k(x_k, u_k) \right].    (62)

The minimization can be performed with respect to constraints such as

x_{k+1} = f(x_k, u_k),    (63)

x_0 = given initial state,    (64)

u_{min} \leq u_k \leq u_{max},    (65)

x_{min} \leq x_k \leq x_{max},    (66)

where (63) describes the system dynamics, (64) is the initial state, and (65) and (66) are constraints on the input and the state.
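As a concrete illustration of the criterion (62) and the dynamics constraint (63), the sketch below evaluates the finite-horizon cost of a candidate input sequence by simulating an invented linear system forward from x_0. All matrices, weights, and the horizon length are assumptions made only for this illustration.

```python
# Hedged sketch: evaluating the finite-horizon MPC criterion (62) for a given
# candidate input sequence by simulating the dynamics (63) forward from x_0.
import numpy as np

A = np.array([[0.99, 0.1], [0.0, 0.9]])   # assumed system matrices
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # stage weight: L_k(x, u) = 1/2 (x^T Q x + u^T R u)
R = np.array([[1.0]])
Q_N = np.eye(2)        # terminal weight: phi(x_N) = 1/2 x_N^T Q_N x_N
N = 20

def mpc_cost(x0, u_seq):
    x, cost = x0, 0.0
    for u in u_seq:                                   # k = 0 .. N-1
        cost += 0.5 * (x.T @ Q @ x + u.T @ R @ u)     # stage cost
        x = A @ x + B @ u                             # dynamics constraint (63)
    return cost + 0.5 * x.T @ Q_N @ x                 # terminal cost

x0 = np.array([1.0, 0.0])
u_zero = [np.zeros(1) for _ in range(N)]              # trivial candidate sequence
print(float(mpc_cost(x0, u_zero)))
```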
2.3.1 MPC Without Constraints

If the controlled system is linear and there are no constraints on the state and input, equations (63)–(66) take the form

x_{k+1} = A x_k + B u_k,    (67)

x_0 = \text{constant}.    (68)

Then we consider a quadratic criterion, such as

J(x, u) = \frac{1}{2} x_N^T Q_N x_N + \frac{1}{2} \sum_{k=0}^{N-1} \left( x_k^T Q x_k + u_k^T R u_k \right) = \frac{1}{2} \bar{X}^T \bar{Q} \bar{X} + \frac{1}{2} \bar{U}^T \bar{R} \bar{U} + \frac{1}{2} x_0^T Q x_0.    (69)

The minimization problem then changes to

\min_{\bar{X}, \bar{U}} J(\bar{X}, \bar{U}) = \min_{\bar{X}, \bar{U}} \frac{1}{2} \bar{X}^T \bar{Q} \bar{X} + \frac{1}{2} \bar{U}^T \bar{R} \bar{U} + \frac{1}{2} x_0^T Q x_0,    (70)

where

\bar{X} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}, \quad \bar{U} = \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix},    (71)

\bar{Q} = \mathrm{diag}(Q, Q, \ldots, Q, Q_N), \quad \bar{R} = \mathrm{diag}(R, R, \ldots, R),

and \frac{1}{2} x_0^T Q x_0 is a constant. Since the constant does not change the position of the minimum, this term can be omitted. The constraints, as described in (67), can be written as

\bar{X} = \bar{A} \bar{X} + \bar{B} \bar{U} + A_0 x_0,    (72)

where

\bar{A} = \begin{bmatrix} 0 & & & \\ A & 0 & & \\ & \ddots & \ddots & \\ & & A & 0 \end{bmatrix}, \quad \bar{B} = \mathrm{diag}(B, B, \ldots, B), \quad A_0 = \begin{bmatrix} A \\ 0 \\ \vdots \\ 0 \end{bmatrix}.    (73)

Equation (72) can be further rewritten as

0 = \left( \bar{A} - I \right) \bar{X} + \bar{B} \bar{U} + A_0 x_0,    (74)
0 = \begin{bmatrix} \bar{A} - I & \bar{B} \end{bmatrix} \begin{bmatrix} \bar{X} \\ \bar{U} \end{bmatrix} + A_0 x_0,    (75)

0 = \tilde{A} w + b.    (76)

If we rewrite the minimization criterion (69), the minimization problem without constraints on the input or state can be written as

\min_{w} \frac{1}{2} w^T \begin{bmatrix} \bar{Q} & 0 \\ 0 & \bar{R} \end{bmatrix} w    (77)

subject to

0 = \tilde{A} w + b,    (78)

which is a problem that can be solved by quadratic programming (Section 2.3.3).

2.3.2 Constraints on the Input Vector

For us to be able to add constraints on the input, the set of state vectors \bar{X} has to be eliminated from the criterion. Fortunately, \bar{X} can be eliminated in a simple step-by-step way, as shown below:

x_1 = A x_0 + B u_0,    (79)

x_2 = A x_1 + B u_1 = A^2 x_0 + A B u_0 + B u_1,    (80)

x_k = A^k x_0 + \begin{bmatrix} A^{k-1} B & A^{k-2} B & \cdots & A B & B & 0 & \cdots & 0 \end{bmatrix} \bar{U}.    (81)

Therefore, \bar{X} can be written as

\bar{X} = \hat{B} \bar{U} + \hat{A} x_0 = \begin{bmatrix} B & & & \\ A B & B & & \\ \vdots & & \ddots & \\ A^{N-1} B & A^{N-2} B & \cdots & B \end{bmatrix} \bar{U} + \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix} x_0.    (82)

By substituting (82) into (69) we get

J(\bar{U}, x_0) = \frac{1}{2} \left( \hat{B} \bar{U} + \hat{A} x_0 \right)^T \bar{Q} \left( \hat{B} \bar{U} + \hat{A} x_0 \right) + \frac{1}{2} \bar{U}^T \bar{R} \bar{U},    (83)

J(\bar{U}, x_0) = \frac{1}{2} \bar{U}^T \hat{B}^T \bar{Q} \hat{B} \bar{U} + x_0^T \hat{A}^T \bar{Q} \hat{B} \bar{U} + \frac{1}{2} x_0^T \hat{A}^T \bar{Q} \hat{A} x_0 + \frac{1}{2} \bar{U}^T \bar{R} \bar{U},    (84)

J(\bar{U}, x_0) = \frac{1}{2} \bar{U}^T \left( \hat{B}^T \bar{Q} \hat{B} + \bar{R} \right) \bar{U} + x_0^T \hat{A}^T \bar{Q} \hat{B} \bar{U} + \frac{1}{2} x_0^T \hat{A}^T \bar{Q} \hat{A} x_0,    (85)
J(\bar{U}, x_0) = \frac{1}{2} \bar{U}^T \left( \hat{B}^T \bar{Q} \hat{B} + \bar{R} \right) \bar{U} + x_0^T \hat{A}^T \bar{Q} \hat{B} \bar{U},    (86)

J(\bar{U}, x_0) = \frac{1}{2} \bar{U}^T H \bar{U} + x_0^T F^T \bar{U}.    (87)

Because x_0^T \hat{A}^T \bar{Q} \hat{A} x_0 is a constant and does not change the position of the minimum, it could be omitted. The minimization problem then changes to

\min_{\bar{U}} \frac{1}{2} \bar{U}^T H \bar{U} + x_0^T F^T \bar{U}    (88)

with constraints

u_k \leq u_{max} \quad \text{and} \quad u_k \geq u_{min},    (89)

which is also a problem that can be solved by quadratic programming.

2.3.3 Quadratic Programming

Quadratic programming is a general term denoting a set of algorithms solving the quadratic optimization problem (minimization for M > 0 or maximization for M < 0) with linear constraints. Since our goal is to minimize the reward, then without loss of generality, only the minimization problem will be considered henceforth. The optimization problem can be written as

\min_{x} \frac{1}{2} x^T M x + n^T x,    (90)

subject to the constraints

A x \leq b,    (91)

A_{eq} x = b_{eq},    (92)

b_l \leq x \leq b_u,    (93)

where (91) represents inequality constraints, (92) represents equality constraints, and b_l and b_u in (93) represent the lower and upper bounds, respectively.
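A minimal sketch of the input-constrained formulation (88)-(89) follows: it builds the prediction matrices from (82), forms H and F as in (87), and solves the bounded problem with a general-purpose solver. The thesis uses Matlab's quadprog; scipy.optimize.minimize is used here only as a stand-in, and the system, weights, and input bounds are all assumptions for illustration.

```python
# Hedged sketch: condensed, input-constrained MPC as the dense QP (88)-(89).
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.99, 0.1], [0.0, 0.9]])   # assumed system
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); N = 20
nx, nu = 2, 1

# Prediction matrices (82): Xbar = Bhat*Ubar + Ahat*x0
Ahat = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Bhat = np.zeros((N * nx, N * nu))
for row in range(N):
    for col in range(row + 1):
        Bhat[row*nx:(row+1)*nx, col*nu:(col+1)*nu] = \
            np.linalg.matrix_power(A, row - col) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = Bhat.T @ Qbar @ Bhat + Rbar            # Hessian of (87)
F = Bhat.T @ Qbar @ Ahat                   # linear term coefficient: f = F @ x0

x0 = np.array([-5.0, 0.0])
f = F @ x0
cost = lambda U: 0.5 * U @ H @ U + f @ U
grad = lambda U: H @ U + f
res = minimize(cost, np.zeros(N * nu), jac=grad,
               bounds=[(0.0, 1.0)] * (N * nu))   # assumed u_min = 0, u_max = 1
print(res.x[:5])                                  # first few optimal inputs
```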
Unconstrained Optimization

There are many algorithms able to solve such optimization problems. If only the easiest form of this problem is considered, which is just the optimization problem (90) without any constraints, the solution is simple (provided the function f(x) is convex). The first-order necessary condition of optimality states that the gradient of the optimized function equals zero. In our case, if we denote

f(x) = \frac{1}{2} x^T M x + n^T x,    (94)

then

\nabla f(x) \overset{!}{=} 0,    (95)

df = \nabla^T f(x) \, dx,    (96)

df = \frac{1}{2} \left( dx^T M x + x^T M \, dx \right) + n^T dx,    (97)

df = \frac{1}{2} \left( x^T M^T dx + x^T M \, dx \right) + n^T dx,    (98)

df = \left( x^T \frac{M^T + M}{2} + n^T \right) dx.    (99)

Therefore, according to (96) and (99),

\nabla f(x) = \frac{M^T + M}{2} x + n.    (100)

Under the assumption M^T = M, which is satisfied in our case, (100) further simplifies to

\nabla f(x) = M x + n.    (101)

Since the first-order condition (95) has to be fulfilled, equation (101) can be rewritten as

M x + n = 0,    (102)

which is just a set of linear equations, whose solution is denoted as the critical point, where the minimum of f(x) may be found. The second-order necessary and sufficient conditions are given by the Hessian,

\nabla^2 f(x) = M.    (103)

The so-called second derivative test states that if the Hessian is positive definite, the critical point is the minimum of the optimized function f(x). On the other hand, if the Hessian is negative definite, the critical point forms the maximum of the function f(x). If there are both positive and negative eigenvalues of the Hessian, the critical point is a saddle point. In all other cases, the test does not give an answer.
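The unconstrained case therefore reduces to the linear system (102) plus the second-derivative test (103). A minimal sketch with an invented symmetric matrix M follows.

```python
# Hedged sketch: unconstrained quadratic program (90) solved via (102),
# with the second-derivative (Hessian eigenvalue) test from (103).
import numpy as np

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # made-up symmetric matrix
n = np.array([1.0, -2.0])

x_star = np.linalg.solve(M, -n)          # critical point of 1/2 x^T M x + n^T x
eigvals = np.linalg.eigvalsh(M)
print("critical point:", x_star)
print("minimum" if np.all(eigvals > 0) else "not a minimum (check eigenvalues)")
```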
Constrained Optimization

The constrained optimization problem is usually more complex than the unconstrained one, and its solution cannot be obtained just by solving a set of linear equations. Therefore, more sophisticated methods have to be used. Since most, if not all, of the work done on this thesis was done using the Matlab [6] or Matlab/Simulink environment [7], this section will concentrate on the constrained quadratic optimization algorithms used by the Matlab environment.

According to [8], Matlab uses three algorithms to solve quadratic programming problems described by (90)–(93). Those algorithms are:

- Interior-point-convex: solves convex problems with any combination of constraints
- Trust-region-reflective: solves bound-constrained or linear-equality-constrained problems
- Active-set: solves problems with any combination of constraints

A detailed description of those algorithms is given in the Matlab documentation [9]. Even though there are three distinct algorithms used by Matlab, there is only one function for solving quadratic programming problems. The function is called quadprog, and its description can be found in the documentation [10].

2.3.4 Receding Horizon

Notice that the solution to the minimization problem, the control sequence, does not use any feedback from the controlled system besides the initial state x_0. It relies purely on the model. Such control is called feed-forward control. The most significant disadvantage of this approach is a possible divergence between the state vector predicted by the model and the state of the real controlled system. This divergence is caused by some combination of the inaccuracy of the model and the process noise of the system.

The MPC works on a principle called the receding horizon, which solves this problem in an elegant way. The quadratic program solves the given optimization problem on the horizon of length N and produces an input sequence \bar{U}. Then, only the first input from this sequence is actually applied, and the state of the system is measured. This new state is used as the initial state for the optimization problem in the next step. The receding horizon approach ensures that the aforementioned divergence does not affect the control of the system. This approach of using the receding horizon is sometimes called open-loop feedback.
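A compact sketch of the receding-horizon loop described above follows: an open-loop optimal sequence is re-computed at every step for an invented scalar plant with process noise, and only its first element is applied. An unconstrained planner is used here only to keep the example short; all numbers are assumptions for illustration.

```python
# Hedged sketch of the receding-horizon (open-loop feedback) principle on a
# made-up scalar plant x_{t+1} = a*x_t + b*u_t + noise.
import numpy as np

a, b, q, r, N = 0.95, 0.1, 1.0, 0.1, 10
rng = np.random.default_rng(0)

def plan(x0):
    """Unconstrained finite-horizon plan: minimize sum of q*x_k^2 + r*u_k^2."""
    Ahat = np.array([a ** (k + 1) for k in range(N)])
    Bhat = np.zeros((N, N))
    for row in range(N):
        for col in range(row + 1):
            Bhat[row, col] = a ** (row - col) * b
    H = Bhat.T @ (q * np.eye(N)) @ Bhat + r * np.eye(N)
    f = Bhat.T @ (q * Ahat) * x0
    return np.linalg.solve(H, -f)          # open-loop optimal input sequence

x = 5.0
for t in range(30):
    u = plan(x)[0]                         # receding horizon: apply first input only
    x = a * x + b * u + 0.01 * rng.standard_normal()   # plant with process noise
print("state after 30 steps:", x)
```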
3 Problem Description

As stated in the introduction, the goal of this thesis is to develop an automatic self-tuning controller for regulating temperature (a thermostat). Such a controller should be able to find an optimal control law solely based on the system's input, output, and a predefined reward, which itself is a function of the input and output. The final control law should be able to deal with the constraints on the input introduced by the boiler.

3.1 Controlled System

For simplicity, the performance of our controllers will be tested on one zone (room) only, which is a SISO problem. However, all results developed in this thesis are applicable to a whole building (a MIMO problem). The temperature regulation in a building depends on several factors. The first one is the heat capacity (C [J/K = kg·m²/(K·s²)]), which says how much heat has to be added to the system to change its temperature by 1 K. This quantity is mostly affected by the proportions of the building, by the material used during the construction, and also by the furnishing. The second important factor is the heat transfer coefficient (U [W/(m²·K) = kg/(K·s³)]), which is in our case used to determine how much heat transfers through a 1 m² area of some object (e.g. a wall) from one side to the other if the difference in temperature between those sides is 1 K. Since the heat transfer coefficient is just the inverse of the thermal resistance, its value is mostly affected by the amount and type of insulation used on the building and by the insulation properties of the construction material itself. Apart from the control input, the temperature regulation can also be affected by additional inputs, such as solar heat gain (an increase in temperature as solar radiation is absorbed), the number of people in the building, or even the presence of powerful computers or servers.

3.2 Demands on the Control Quality

The difference between a typical control problem and a typical reinforcement learning problem does not lie only in maximizing or minimizing a reward or cost; there are also differences in the demanded performance and convergence speed. Most of the algorithms applied to machine learning problems which have been proved to be optimal need a massive amount of data (which takes time to generate) to reach convergence. Therefore, these approaches cannot be used in the intelligent thermostat. The thermostat has to be able to control the room temperature from the moment of installation. There definitely cannot be a week-, month-, or even year-long learning period. As mentioned above, the learning period has to be a few days long at maximum (given that those days contain enough useful data; more in section 5.4). However, this is not the only requirement on the learning period. The thermostat has to be able to control the system even during the learning period, though obviously, the control does not have to be optimal. There are only two mild demands on the control during the learning phase: the controller has to be stabilizing, and its actions must not cause the room temperature to diverge from the reference temperature to a degree which could be recognized by the residents.
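To make the role of the heat capacity C and the heat transfer coefficient U tangible, the following sketch steps a minimal single-zone heat balance forward in time. It is not the realistic thermal model used later in the thesis, and every numerical value in it is an assumption chosen only for illustration.

```python
# Hedged sketch (not the thesis's thermal model): a minimal single-zone heat
# balance C * dT/dt = U_coef * A_wall * (T_out - T) + Q_boiler, discretized with
# a forward-Euler step. All values below are illustrative assumptions.
C = 5.0e6          # heat capacity of the zone [J/K]
U_coef = 0.5       # heat transfer coefficient [W/(m^2*K)]
A_wall = 100.0     # effective envelope area [m^2]
dt = 60.0          # sampling period [s]

def zone_step(T_room, T_out, Q_boiler):
    """One Euler step of the room temperature [deg C]."""
    dT = (U_coef * A_wall * (T_out - T_room) + Q_boiler) / C
    return T_room + dt * dT

T = 20.0
for _ in range(60):                      # one hour, boiler off, 0 deg C outside
    T = zone_step(T, 0.0, 0.0)
print("room temperature after 1 h:", round(T, 2), "deg C")
```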
Theme 6. Money Grammar: word order; questions
Theme 6 Money Grammar: word order; questions Čas potřebný k prostudování učiva lekce: 8 vyučujících hodin Čas potřebný k ověření učiva lekce: 45 minut KLÍNSKÝ P., MÜNCH O., CHROMÁ D., Ekonomika, EDUKO
PAINTING SCHEMES CATALOGUE 2012
Evektor-Aerotechnik a.s., Letecká č.p. 84, 686 04 Kunovice, Czech Republic Phone: +40 57 57 Fax: +40 57 57 90 E-mail: sales@evektor.cz Web site: www.evektoraircraft.com PAINTING SCHEMES CATALOGUE 0 Painting
Dynamic Signals. Ananda V. Mysore SJSU
Dynamic Signals Ananda V. Mysore SJSU Static vs. Dynamic Signals In principle, all signals are dynamic; they do not have a perfectly constant value over time. Static signals are those for which changes
By David Cameron VE7LTD
By David Cameron VE7LTD Introduction to Speaker RF Cavity Filter Types Why Does a Repeater Need a Duplexer Types of Duplexers Hybrid Pass/Reject Duplexer Detail Finding a Duplexer for Ham Use Questions?
Just write down your most recent and important education. Remember that sometimes less is more some people may be considered overqualified.
CURRICULUM VITAE - EDUCATION Jindřich Bláha Výukový materiál zpracován v rámci projektu EU peníze školám Autorem materiálu a všech jeho částí, není-li uvedeno jinak, je Bc. Jindřich Bláha. Dostupné z Metodického
Fytomineral. Inovace Innovations. Energy News 04/2008
Energy News 4 Inovace Innovations 1 Fytomineral Tímto Vám sdělujeme, že již byly vybrány a objednány nové lahve a uzávěry na produkt Fytomineral, které by měly předejít únikům tekutiny při přepravě. První
SPECIAL THEORY OF RELATIVITY
SPECIAL THEORY OF RELATIVITY 1. Basi information author Albert Einstein phenomena obsered when TWO frames of referene moe relatie to eah other with speed lose to the speed of light 1905 - speial theory
Chapter 7: Process Synchronization
Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization in Solaris
PC/104, PC/104-Plus. 196 ept GmbH I Tel. +49 (0) / I Fax +49 (0) / I I
E L E C T R O N I C C O N N E C T O R S 196 ept GmbH I Tel. +49 (0) 88 61 / 25 01 0 I Fax +49 (0) 88 61 / 55 07 I E-Mail sales@ept.de I www.ept.de Contents Introduction 198 Overview 199 The Standard 200
Stojan pro vrtačku plošných spojů
Střední škola průmyslová a hotelová Uherské Hradiště Kollárova 617, Uherské Hradiště Stojan pro vrtačku plošných spojů Závěrečný projekt Autor práce: Koutný Radim Lukáš Martin Janoštík Václav Vedoucí projektu:
Klepnutím lze upravit styl Click to edit Master title style předlohy nadpisů.
nadpisu. Case Study Environmental Controlling level Control Fifth level Implementation Policy and goals Organisation Documentation Information Mass and energy balances Analysis Planning of measures 1 1
On large rigid sets of monounary algebras. D. Jakubíková-Studenovská P. J. Šafárik University, Košice, Slovakia
On large rigid sets of monounary algebras D. Jakubíková-Studenovská P. J. Šafárik University, Košice, Slovakia coauthor G. Czédli, University of Szeged, Hungary The 54st Summer School on General Algebra
Dynamic Development of Vocabulary Richness of Text. Miroslav Kubát & Radek Čech University of Ostrava Czech Republic
Dynamic Development of Vocabulary Richness of Text Miroslav Kubát & Radek Čech University of Ostrava Czech Republic Aim To analyze a dynamic development of vocabulary richness from a methodological point
SPECIFICATION FOR ALDER LED
SPECIFICATION FOR ALDER LED MODEL:AS-D75xxyy-C2LZ-H1-E 1 / 13 Absolute Maximum Ratings (Ta = 25 C) Parameter Symbol Absolute maximum Rating Unit Peak Forward Current I FP 500 ma Forward Current(DC) IF
Present Perfect x Past Simple Předpřítomný čas x Minulý čas Pracovní list
VY_32_INOVACE_AJ_133 Present Perfect x Past Simple Předpřítomný čas x Minulý čas Pracovní list PhDr. Zuzana Žantovská Období vytvoření: květen 2013 Ročník: 1. 4. ročník SŠ Tematická oblast: Gramatika slovesa
EXACT DS OFFICE. The best lens for office work
EXACT DS The best lens for office work EXACT DS When Your Glasses Are Not Enough Lenses with only a reading area provide clear vision of objects located close up, while progressive lenses only provide
GENERAL INFORMATION MATCH: ALSA PRO ARENA MASTERS DATE: TIME SCHEDULE:
GENERAL INFORMATION MATCH: ALSA PRO ARENA MASTERS DATE: 22.9. - 23.9.2018 TIME SCHEDULE: Mainmatch 1 - Saturday 22.9. registration: 22.9.2018-9.00h first shot: 22.9.2018-10.00h Mainmatch 2 - Sunday 23.9.
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise ROZHODNUTÍ KOMISE ze dne 27. června 1997 o postupu prokazování shody stavebních výrobků ve smyslu čl. 20 odst. 2
Next line show use of paragraf symbol. It should be kept with the following number. Jak může státní zástupce věc odložit zmiňuje 159a.
1 Bad line breaks The follwing text has prepostions O and k at end of line which is incorrect according to Czech language typography standards: Mezi oblíbené dětské pohádky patří pohádky O Palečkovi, Alenka
CHAIN TRANSMISSIONS AND WHEELS
Second School Year CHAIN TRANSMISSIONS AND WHEELS A. Chain transmissions We can use chain transmissions for the transfer and change of rotation motion and the torsional moment. They transfer forces from
Škola: Střední škola obchodní, České Budějovice, Husova 9. Inovace a zkvalitnění výuky prostřednictvím ICT
Škola: Střední škola obchodní, České Budějovice, Husova 9 Projekt MŠMT ČR: EU PENÍZE ŠKOLÁM Číslo projektu: CZ.1.07/1.5.00/34.0536 Název projektu školy: Výuka s ICT na SŠ obchodní České Budějovice Šablona
Návrh a implementace algoritmů pro adaptivní řízení průmyslových robotů
Návrh a implementace algoritmů pro adaptivní řízení průmyslových robotů Design and implementation of algorithms for adaptive control of stationary robots Marcel Vytečka 1, Karel Zídek 2 Abstrakt Článek
Production: The Case of One Producer
Production: The Case of One Producer Economics II: Microeconomics VŠE Praha November 2009 Aslanyan (VŠE Praha) Monopoly 11/09 1 / 27 Microeconomics Consumers: Firms: People. Households. Optimisation Now
CZ.1.07/1.5.00/ Zefektivnění výuky prostřednictvím ICT technologií III/2 - Inovace a zkvalitnění výuky prostřednictvím ICT
Autor: Sylva Máčalová Tematický celek : Gramatika Cílová skupina : mírně pokročilý - pokročilý Anotace Materiál má podobu pracovního listu, který obsahuje cvičení, pomocí nichž si žáci procvičí rozdíly
EU peníze středním školám digitální učební materiál
EU peníze středním školám digitální učební materiál Číslo projektu: Číslo a název šablony klíčové aktivity: Tematická oblast, název DUMu: Autor: CZ.1.07/1.5.00/34.0515 III/2 Inovace a zkvalitnění výuky
CZ.1.07/1.5.00/
Projekt: Příjemce: Digitální učební materiály ve škole, registrační číslo projektu CZ.1.07/1.5.00/34.0527 Střední zdravotnická škola a Vyšší odborná škola zdravotnická, Husova 3, 371 60 České Budějovice
The Military Technical Institute
The Military Technical Institute The security is essential, the security is a challenge Jitka Čapková Petr Kozák Vojenský technický ústav, s.p. The Czech Republic security legislation applicable to CIS:
CZ.1.07/1.5.00/34.0527
Projekt: Příjemce: Digitální učební materiály ve škole, registrační číslo projektu CZ.1.07/1.5.00/34.0527 Střední zdravotnická škola a Vyšší odborná škola zdravotnická, Husova 3, 371 60 České Budějovice
Caroline Glendinning Jenni Brooks Kate Gridley. Social Policy Research Unit University of York
Caroline Glendinning Jenni Brooks Kate Gridley Social Policy Research Unit University of York Growing numbers of people with complex and severe disabilities Henwood and Hudson (2009) for CSCI: are the
VŠEOBECNÁ TÉMATA PRO SOU Mgr. Dita Hejlová
VŠEOBECNÁ TÉMATA PRO SOU Mgr. Dita Hejlová VZDĚLÁVÁNÍ V ČR VY_32_INOVACE_AH_3_03 OPVK 1.5 EU peníze středním školám CZ.1.07/1.500/34.0116 Modernizace výuky na učilišti Název školy Název šablony Předmět
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise ROZHODNUTÍ KOMISE ze dne 14. července 1997 o postupu prokazování shody stavebních výrobků ve smyslu čl. 20 odst.
SEZNAM PŘÍLOH. Příloha 1 Dotazník Tartu, Estonsko (anglická verze) Příloha 2 Dotazník Praha, ČR (česká verze)... 91
SEZNAM PŘÍLOH Příloha 1 Dotazník Tartu, Estonsko (anglická verze)... 90 Příloha 2 Dotazník Praha, ČR (česká verze)... 91 Příloha 3 Emailové dotazy, vedení fakult TÜ... 92 Příloha 4 Emailové dotazy na vedení
WYSIWYG EDITOR PRO XML FORM
WYSIWYG EDITOR PRO XML FORM Ing. Tran Thanh Huan, Ing. Nguyen Ba Nghien, Doc. Ing. Josef Kokeš, CSc Abstract: In this paper, we introduce the WYSIWYG editor pro XML Form. We also show how to create a form
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise ROZHODNUTÍ KOMISE ze dne 17. února 1997 o postupu prokazování shody stavebních výrobků ve smyslu čl. 20 odst. 2
Project Life-Cycle Data Management
Project Life-Cycle Data Management 1 Contend UJV Introduction Problem definition Input condition Proposed solution Reference Conclusion 2 UJV introduction Research, design and engineering company 1000
2011 Jan Janoušek BI-PJP. Evropský sociální fond Praha & EU: Investujeme do vaší budoucnosti
PROGRAMOVACÍ JAZYKY A PŘEKLADAČE TRANSFORMACE GRAMATIK NA LL(1) GRAMATIKU. TABULKA SYMBOLŮ. VNITŘNÍ REPREZENTACE: AST. JAZYK ZÁSOBNÍKOVÉHO POČÍTAČE. RUNTIME PROSTŘEDÍ. 2011 Jan Janoušek BI-PJP Evropský
Instrukce: Cvičný test má celkem 3 části, čas určený pro tyto části je 20 minut. 1. Reading = 6 bodů 2. Use of English = 14 bodů 3.
Vážení studenti, na následujících stranách si můžete otestovat svou znalost angličtiny a orientačně zjistit, kolik bodů za jazykové kompetence byste získali v přijímacím řízení. Maximální počet bodů je
DYNAMICS - Force effect in time and in space - Work and energy
Název projektu: Automatizace výrobních procesů ve strojírenství a řemeslech Registrační číslo: CZ.1.07/1.1.30/01.0038 Příjemce: SPŠ strojnická a SOŠ profesora Švejcara Plzeň, Klatovská 109 Tento projekt
AIC ČESKÁ REPUBLIKA CZECH REPUBLIC
ČESKÁ REPUBLIKA CZECH REPUBLIC ŘÍZENÍ LETOVÉHO PROVOZU ČR, s.p. Letecká informační služba AIR NAVIGATION SERVICES OF THE C.R. Aeronautical Information Service Navigační 787 252 61 Jeneč A 1/14 20 FEB +420
CZ.1.07/1.5.00/
Projekt: Příjemce: Digitální učební materiály ve škole, registrační číslo projektu CZ.1.07/1.5.00/34.0527 Střední zdravotnická škola a Vyšší odborná škola zdravotnická, Husova 3, 371 60 České Budějovice
Výukový materiál zpracovaný v rámci projektu EU peníze do škol. illness, a text
Výukový materiál zpracovaný v rámci projektu EU peníze do škol ZŠ Litoměřice, Ladova Ladova 5 412 01 Litoměřice www.zsladovaltm.cz vedeni@zsladovaltm.cz Pořadové číslo projektu: CZ.1.07/1.4.00/21.0948
GENERAL INFORMATION MATCH: ALSA PRO HOT SHOTS 2018 DATE:
GENERAL INFORMATION MATCH: ALSA PRO HOT SHOTS 2018 DATE: 7.7. - 8.7.2018 TIME SCHEDULE: Prematch - Friday registration: 6.7.2018-10.00h first shot: 6.7.2018-11.00h Mainmatch 1 - Saturday registration:
Střední průmyslová škola strojnická Olomouc, tř.17. listopadu 49
Střední průmyslová škola strojnická Olomouc, tř.17. listopadu 49 Výukový materiál zpracovaný v rámci projektu Výuka moderně Registrační číslo projektu: CZ.1.07/1.5.00/34.0205 Šablona: III/2 Anglický jazyk
Geometry of image formation
eometry of image formation Tomáš Svoboda, svoboda@cmp.felk.cvut.cz Czech Technical University in Prague, Center for Machine Perception http://cmp.felk.cvut.cz Last update: July 4, 2008 Talk Outline Pinhole
Effect of temperature. transport properties J. FOŘT, Z. PAVLÍK, J. ŽUMÁR,, M. PAVLÍKOVA & R. ČERNÝ Č CTU PRAGUE, CZECH REPUBLIC
Effect of temperature on water vapour transport properties J. FOŘT, Z. PAVLÍK, J. ŽUMÁR,, M. PAVLÍKOVA & R. ČERNÝ Č CTU PRAGUE, CZECH REPUBLIC Outline Introduction motivation, water vapour transport Experimental
dat 2017 Dostupný z Licence Creative Commons Uveďte autora-zachovejte licenci 4.0 Mezinárodní
Interní pravidla pro zacházení s osobními údaji při archivaci a sdílení výzkumných dat Koščík, Michal 2017 Dostupný z http://www.nusl.cz/ntk/nusl-367303 Dílo je chráněno podle autorského zákona č. 121/2000
USER'S MANUAL FAN MOTOR DRIVER FMD-02
USER'S MANUAL FAN MOTOR DRIVER FMD-02 IMPORTANT NOTE: Read this manual carefully before installing or operating your new air conditioning unit. Make sure to save this manual for future reference. FMD Module
Tento materiál byl vytvořen v rámci projektu Operačního programu Vzdělávání pro konkurenceschopnost.
Tento materiál byl vytvořen v rámci projektu Operačního programu Vzdělávání pro konkurenceschopnost. Projekt MŠMT ČR Číslo projektu Název projektu školy Klíčová aktivita III/2 EU PENÍZE ŠKOLÁM CZ.1.07/1.4.00/21.2146
CZ.1.07/1.5.00/
Projekt: Příjemce: Digitální učební materiály ve škole, registrační číslo projektu CZ.1.07/1.5.00/34.0527 Střední zdravotnická škola a Vyšší odborná škola zdravotnická, Husova 3, 371 60 České Budějovice
IPR v H2020. Matěj Myška myska@ctt.muni.cz
IPR v H2020 Matěj Myška myska@ctt.muni.cz Zdroje [1] KRATĚNOVÁ, J. a J. Kotouček. Duševní vlastnictví v projektech H2020. Technologické centrum AV ČR, Edice Vademecum H2020, 2015. Dostupné i online: http://www.tc.cz/cs/publikace/publikace/seznampublikaci/dusevni-vlastnictvi-v-projektech-horizontu-2020
Radiova meteoricka detekc nı stanice RMDS01A
Radiova meteoricka detekc nı stanice RMDS01A Jakub Ka kona, kaklik@mlab.cz 15. u nora 2014 Abstrakt Konstrukce za kladnı ho softwarove definovane ho pr ijı macı ho syste mu pro detekci meteoru. 1 Obsah
PART 2 - SPECIAL WHOLESALE OFFER OF PLANTS SPRING 2016 NEWS MAY 2016 SUCCULENT SPECIAL WHOLESALE ASSORTMENT
PART 2 - SPECIAL WHOLESALE OFFER OF PLANTS SPRING 2016 NEWS MAY 2016 SUCCULENT SPECIAL WHOLESALE ASSORTMENT Dear Friends We will now be able to buy from us succulent plants at very good wholesale price.
Právní formy podnikání v ČR
Bankovní institut vysoká škola Praha Právní formy podnikání v ČR Bakalářská práce Prokeš Václav Leden, 2009 Bankovní institut vysoká škola Praha Katedra Bankovnictví Právní formy podnikání v ČR Bakalářská
SGM. Smart Grid Management THE FUTURE FOR ENERGY-EFFICIENT SMART GRIDS
WHO ARE WE? a company specializing in software applications for smart energy grids management and innovation a multidisciplinary team of experienced professionals from practice and from Czech technical
Čtvrtý Pentagram The fourth Pentagram
Energy News 4 1 Čtvrtý Pentagram The fourth Pentagram Na jaře příštího roku nabídneme našim zákazníkům již čtvrtý Pentagram a to Pentagram šamponů. K zavedení tohoto Pentagramu jsme se rozhodli na základě
PRODEJNÍ EAUKCE A JEJICH ROSTOUCÍ SEX-APPEAL SELLING EAUCTIONS AND THEIR GROWING APPEAL
PRODEJNÍ EAUKCE A JEJICH ROSTOUCÍ SEX-APPEAL SELLING EAUCTIONS AND THEIR GROWING APPEAL Ing. Jan HAVLÍK, MPA tajemník Městského úřadu Žďár nad Sázavou Chief Executive Municipality of Žďár nad Sázavou CO
Zubní pasty v pozměněném složení a novém designu
Energy news4 Energy News 04/2010 Inovace 1 Zubní pasty v pozměněném složení a novém designu Od října tohoto roku se začnete setkávat s našimi zubními pastami v pozměněném složení a ve zcela novém designu.
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise
Příručka ke směrnici 89/106/EHS o stavebních výrobcích / Příloha III - Rozhodnutí Komise ROZHODNUTÍ KOMISE ze dne 17. února 1997 o postupu prokazování shody stavebních výrobků ve smyslu čl. 20 odst. 2
Research infrastructure in the rhythm of BLUES. More time and money for entrepreneurs
Research infrastructure in the rhythm of BLUES More time and money for entrepreneurs 1 I. What is it? II. How does it work? III. References Where to find out more? IV. What is it good for? 2 I. What is
Czech Technical University in Prague DOCTORAL THESIS
Czech Technical University in Prague Faculty of Nuclear Sciences and Physical Engineering DOCTORAL THESIS CERN-THESIS-2015-137 15/10/2015 Search for B! µ + µ Decays with the Full Run I Data of The ATLAS