This thesis deals with learning and coordination in large, distributed systems. We focus on settings that exhibit an interesting and frequently observable structure: devices (agents, hereafter) are confronted with a sequence of different, possibly but not necessarily comparable, situations. The agents have to solve a common task and thus must learn a good coordinated behavior for each situation. When a new situation occurs, old strategies may either become useless or serve as a good basis for further adaptation, depending on how similar the previous and the new situation are. Models for such problems quickly become complex and raise research questions of their own. Hence, to focus on the learning process, we deal with simple sequences of stateless games: each game is played repeatedly for a certain number of iterations, which the agents do not know in advance, before a new game occurs. We develop a model, called sequential stage games (SSGs), that formalizes such problems, and establish the required foundations. We then propose Distributed Stateless Learning (DSL), a multiagent reinforcement learning approach for cooperative SSGs. To speed up learning in systems with thousands of agents, we also develop several coordination strategies that coordinate the agents' action choices, e.g., via communication or by storing learned knowledge on so-called storage media in the environment. We provide a careful theoretical analysis of our approach and prove its convergence to (near-)optimal solutions, provided each game is played sufficiently long. Furthermore, we show that DSL enables learning under agent-individual noisy reward perceptions. Our theoretical results are supported by empirical analyses. In summary, we provide first insights into learning and coordination in sequences of games and develop efficient approaches for the considered scenarios.