// Replicated state machine implementation with a primary and several
// backups. The primary receives requests, assigns each a view stamp (a
// vid and a sequence number) in the order of reception, and forwards
// them to all backups. A backup executes requests in the order that
// the primary stamps them and replies with an OK to the primary. The
// primary executes the request after it receives OKs from all backups,
// and sends the reply back to the client.
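//
// As an illustration of the ordering above, a view stamp can be read as the
// pair (vid, seqno), compared first by view and then by sequence number.
// A minimal sketch of such a type (the real viewstamp type is defined
// elsewhere in this codebase; only the vid and seqno fields are relied on
// below, everything else here is illustrative):
//
//   struct viewstamp_sketch {
//       unsigned vid = 0;    // view in which the request was stamped
//       unsigned seqno = 0;  // position of the request within that view
//   };
//   inline bool operator<(viewstamp_sketch a, viewstamp_sketch b) {
//       return a.vid < b.vid || (a.vid == b.vid && a.seqno < b.seqno);
//   }
//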
// The config module will tell the RSM about a new view. If the
// primary in the previous view is a member of the new view, then it
// will stay the primary. Otherwise, the smallest numbered node of
// the previous view will be the new primary. In either case, the new
// primary will be a node from the previous view. The configuration
// module constructs the sequence of views for the RSM and the RSM
// ensures there will always be one primary, who was a member of the
// previous view.
//
// When a new node starts, the recovery thread is in charge of joining
// the RSM. It will collect the internal RSM state from the primary;
// the primary asks the config module to add the new node and returns
// to the joining node the internal RSM state (e.g., the Paxos log). Since
// there is only one primary, all joins happen in a well-defined total
// order.
//
// The recovery thread also runs during a view change (e.g., when a node
// has failed). After a failure some of the backups could have
// processed a request that the primary has not, but those results are
// not visible to clients (since only the primary responds). If the
// primary of the previous view is in the current view, then it will
// be the primary and its state is authoritative: the backups download
// the current state from the primary. A primary waits until all
// backups have downloaded the state. Once the RSM is in sync, the
// primary accepts requests again from clients. If one of the backups
// is the new primary, then its state is authoritative. In either
// scenario, the next view uses as primary a node that has the state
// resulting from processing all acknowledged client requests.
// Therefore, if the nodes sync up before processing the next request,
// the next view will have the correct state.
//
// While the RSM is in a view change (i.e., a node has failed, a new view
// has been formed, but the sync hasn't completed), another failure
// could happen, which complicates the view change. During syncing, the
// primary or backups can time out and initiate another Paxos round.
// There are two variables that the RSM uses to keep track of what state
// it is in:
//
// - inviewchange: a node has failed and the RSM is performing a view change
// - insync: this node is syncing its state
//
// If inviewchange is false and a node is the primary, then it can
// process client requests. If it is true, clients are told to retry
// again later. While inviewchange is true, the RSM may go through several
// member list changes, one by one. After a member list
// change completes, the node tries to sync. If the sync completes,
// the view change completes (and inviewchange is set to false). If
// the sync fails, the node may start another member list change
// (inviewchange = true and insync = false).
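//
// A condensed sketch of how these two flags gate processing (illustrative
// only; the actual checks live in client_invoke() and recovery() below):
//
//   // On the primary, before accepting a client request:
//   if (inviewchange)
//       return rsm_client_protocol::BUSY;   // client retries later
//
//   // In the recovery thread, after a member list change commits:
//   insync = true;
//   bool ok = (primary == cfg->myaddr()) ? sync_with_backups(ml)
//                                        : sync_with_primary(ml);
//   insync = false;
//   if (ok)
//       inviewchange = false;               // view change complete
//   // otherwise stay in the view change; another member list change
//   // (inviewchange == true, insync == false) may follow.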
//
// The implementation should be used only with servers that run all
// requests to completion; in particular, a request shouldn't
// block. If a request blocks, the backup won't respond to the
// primary, and the primary won't execute the request. A request may
// send an RPC to another host, but the RPC should be a one-way
// message to that host; the backup shouldn't do anything based on the
// response or execute after the response, because it is not
// guaranteed that all backups will receive the same response and
// execute in the same order.
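//
// A sketch contrasting an acceptable one-way message with a blocking call,
// using hypothetical helpers (send_one_way, call_and_wait, and peer are not
// part of this codebase):
//
//   // OK: fire-and-forget notification; the handler never inspects a reply,
//   // so every replica behaves identically regardless of what peer does.
//   void notify_peer(const string & msg) {
//       send_one_way(peer, msg);
//   }
//
//   // NOT OK: waiting for a reply can block the handler, and different
//   // replicas are not guaranteed to observe the same reply.
//   int ask_peer(const string & msg) {
//       return call_and_wait(peer, msg);
//   }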
//
// The implementation can be viewed as a layered system:
//   RSM module    ---- in charge of replication
//   config module ---- in charge of view management
//   Paxos module  ---- in charge of running Paxos to agree on a value
//
// Each module has threads and internal locks. Furthermore, a thread
// may call down through the layers (e.g., to run Paxos's proposer).
// When Paxos's acceptor accepts a new value for an instance, a thread
// will invoke an upcall to inform higher layers of the new value.
// The rule is that a module releases its internal locks before it
// upcalls, but can keep its locks when calling down.
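//
// A sketch of the lock rule above, assuming a module with an internal mutex
// mtx and an upcall hook notify_upper (all names here are illustrative):
//
//   void on_accepted(unsigned instance, const string & value) {
//       std::unique_lock<std::mutex> l(mtx);
//       // ... update this module's state under the lock ...
//       l.unlock();                     // release internal lock before upcall
//       notify_upper(instance, value);  // upcall into the higher layer
//   }
//
//   void propose(const string & value) {
//       std::lock_guard<std::mutex> l(mtx);
//       run_paxos_proposer(value);      // calling down may keep our lock held
//   }
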
#include "include/rsm.h"
#include "include/rsm_client.h"

using namespace std::chrono;

rsm_state_transfer::~rsm_state_transfer() {}

rsm::rsm(const string & _first, const string & _me) : primary(_first)
    cfg = std::make_unique<config>(_first, _me, this);
    // Commit the first view here. We cannot have acceptor::acceptor
    // do the commit, since at that time this->cfg is not initialized
    rsmrpc = cfg->get_rpcs();
    rsmrpc->reg(rsm_client_protocol::invoke, &rsm::client_invoke, this);
    rsmrpc->reg(rsm_client_protocol::members, &rsm::client_members, this);
    rsmrpc->reg(rsm_protocol::invoke, &rsm::invoke, this);
    rsmrpc->reg(rsm_protocol::transferreq, &rsm::transferreq, this);
    rsmrpc->reg(rsm_protocol::transferdonereq, &rsm::transferdonereq, this);
    rsmrpc->reg(rsm_protocol::joinreq, &rsm::joinreq, this);

    // tester must be on a different port, otherwise it may partition itself
    testsvr.reset(new rpcs((in_port_t)std::stoi(_me) + 1));
    testsvr->reg(rsm_test_protocol::net_repair, &rsm::test_net_repairreq, this);
    testsvr->reg(rsm_test_protocol::breakpoint, &rsm::breakpointreq, this);
    thread(&rsm::recovery, this).detach();
// The recovery thread runs this function
void rsm::recovery() {
    while (!cfg->ismember(cfg->myaddr(), vid_commit)) {
        // XXX iannucci 2013/09/15 -- I don't understand whether accessing
        // cfg->view_id in this manner involves a race. I suspect not.
        if (join(primary, ml)) {
            commit_change(cfg->view_id(), ml);
        std::this_thread::sleep_for(3000ms); // XXX make another node in cfg primary?
    vid_insync = vid_commit;
    LOG << "sync vid_insync " << vid_insync;
    if (primary == cfg->myaddr()) {
        r = sync_with_backups(ml);
        r = sync_with_primary(ml);
    // If there was a committed view change during the synchronization, restart
    if (vid_insync != vid_commit)
    myvs.vid = vid_commit;
    inviewchange = false;
    LOG << "go to sleep " << insync << " " << inviewchange;
    recovery_cond.wait(ml);
bool rsm::sync_with_backups(lock & rsm_mutex_lock) {
    rsm_mutex_lock.unlock();
    // Make sure that the state of lock_server is stable during
    // synchronization; otherwise, the primary's state may be more recent
    // than the replicas' after synchronization.
    lock invoke_mutex_lock(invoke_mutex);
    // By acquiring and releasing the invoke_mutex once, we make sure that
    // the state of lock_server will not be changed until all
    // replicas are synchronized. The reason is that a client_invoke that
    // arrives after this point will see inviewchange == true and return BUSY.
    rsm_mutex_lock.lock();
    // Start accepting synchronization requests (statetransferreq) now!
    backups = cfg->get_view(vid_insync);
    backups.erase(std::find(backups.begin(), backups.end(), cfg->myaddr()));
    LOG << "backups " << backups;
    sync_cond.wait(rsm_mutex_lock);
bool rsm::sync_with_primary(lock & rsm_mutex_lock) {
    // Remember the primary of vid_insync
    while (vid_insync == vid_commit) {
        if (statetransfer(m, rsm_mutex_lock))
    return statetransferdone(m, rsm_mutex_lock);

// Call to transfer state from m to the local node.
// Assumes that rsm_mutex is already held.
bool rsm::statetransfer(const string & m, lock & rsm_mutex_lock)
    rsm_protocol::transferres r;
    LOG << "contact " << m << " w. my last_myvs(" << last_myvs.vid << "," << last_myvs.seqno << ")";
    rsm_mutex_lock.unlock();
    cl = rpcc::bind_cached(m);
    ret = cl->call_timeout(rsm_protocol::transferreq, 100ms,
                           r, cfg->myaddr(), last_myvs, vid_insync);
    rsm_mutex_lock.lock();
    if (cl == 0 || ret != rsm_protocol::OK) {
        LOG << "couldn't reach " << m << " " << std::hex << cl << " " << std::dec << ret;
    if (stf && last_myvs != r.last) {
        stf->unmarshall_state(r.state);
    LOG << "transfer from " << m << " success, vs(" << last_myvs.vid << "," << last_myvs.seqno << ")";
bool rsm::statetransferdone(const string & m, lock & rsm_mutex_lock) {
    rsm_mutex_lock.unlock();
    if (auto cl = rpcc::bind_cached(m)) {
        auto ret = (rsm_protocol::status)cl->call(rsm_protocol::transferdonereq, r, cfg->myaddr(), vid_insync);
        done = (ret == rsm_protocol::OK);
    rsm_mutex_lock.lock();

bool rsm::join(const string & m, lock & rsm_mutex_lock) {
    LOG << "contacting " << m << " mylast (" << last_myvs.vid << "," << last_myvs.seqno << ")";
    rsm_mutex_lock.unlock();
    auto cl = rpcc::bind_cached(m);
    ret = cl->call_timeout(rsm_protocol::joinreq, 12000ms, log, cfg->myaddr(), last_myvs);
    rsm_mutex_lock.lock();
    if (cl == 0 || ret != rsm_protocol::OK) {
        LOG << "couldn't reach " << m << " " << std::hex << cl << " " << std::dec << ret;
    LOG << "succeeded " << log;
// Config informs rsm whenever it has successfully
// completed a view change
void rsm::commit_change(unsigned vid) {
    commit_change(vid, ml);
    if (cfg->ismember(cfg->myaddr(), vid_commit))

void rsm::commit_change(unsigned vid, lock &) {
    if (vid <= vid_commit)
    LOG << "new view (" << vid << ") last vs (" << last_myvs.vid << ","
        << last_myvs.seqno << ") " << primary << " insync " << insync;
    recovery_cond.notify_one();
    sync_cond.notify_one();
    if (cfg->ismember(cfg->myaddr(), vid_commit))
void rsm::execute(rpc_protocol::proc_id_t procno, const string & req, string & r) {
    handler *h = procs[procno];
    auto ret = (rsm_protocol::status)(*h)(unmarshall(req), rep);
    r = marshall(ret, rep);

static void logHexString(locked_ostream && log, const string & s) {
    log << std::setfill('0') << std::setw(2) << std::hex;
    for (size_t i=0; i<s.size(); i++)
        log << (unsigned int)(unsigned char)s[i];
// Clients call client_invoke to invoke a procedure on the replicated state
// machine: the primary receives the request, assigns it a sequence
// number, and invokes it on all members of the replicated state machine.
rsm_client_protocol::status rsm::client_invoke(string & r, rpc_protocol::proc_id_t procno, const string & req) {
    LOG << "invoke procno 0x" << std::hex << procno;
    lock ml(invoke_mutex);
    LOG << "Checking for inviewchange";
    return rsm_client_protocol::BUSY;
    LOG << "Checking for primacy";
    myaddr = cfg->myaddr();
    if (primary != myaddr)
        return rsm_client_protocol::NOTPRIMARY;
    LOG << "Assigning a viewstamp";
    m = cfg->get_view(vid_commit);
    // assign the RPC the next viewstamp number
    // send an invoke RPC to all slaves in the current view with a timeout
    LOG << "Invoking slaves";
    for (auto & mm : m) {
        // if invoke on slave fails, return rsm_client_protocol::BUSY
        LOG << "Sending invoke to " << mm;
        auto cl = rpcc::bind_cached(mm);
        return rsm_client_protocol::BUSY;
        auto ret = (rsm_protocol::status)cl->call_timeout(rsm_protocol::invoke, 100ms, ignored_rval, procno, vs, req);
        LOG << "Invoke returned " << ret;
        if (ret != rsm_protocol::OK)
            return rsm_client_protocol::BUSY;
    lock rsm_mutex_lock(rsm_mutex);
    partition1(rsm_mutex_lock);
    logHexString(LOG, req);
    execute(procno, req, r);
    logHexString(LOG, r);
    return rsm_client_protocol::OK;
// The primary calls the internal invoke at each member of the
// replicated state machine.
// The replica must execute requests in order (with no gaps)
// according to the requests' seqnos.
rsm_protocol::status rsm::invoke(int &, rpc_protocol::proc_id_t proc, viewstamp vs, const string & req) {
    LOG << "invoke procno 0x" << std::hex << proc;
    lock ml(invoke_mutex);
    // check if !inviewchange
    LOG << "Checking for view change";
    return rsm_protocol::ERR;
    LOG << "Checking for slave status";
    myaddr = cfg->myaddr();
    if (primary == myaddr)
        return rsm_protocol::ERR;
    vector<string> m = cfg->get_view(vid_commit);
    if (std::find(m.begin(), m.end(), myaddr) == m.end())
        return rsm_protocol::ERR;
    // check sequence number
    LOG << "Checking sequence number";
    return rsm_protocol::ERR;
    execute(proc, req, r);
    return rsm_protocol::OK;
// RPC handler: Send back the local node's state to the caller
rsm_protocol::status rsm::transferreq(rsm_protocol::transferres & r, const string & src,
                                      viewstamp last, unsigned vid) {
    LOG << "transferreq from " << src << " (" << last.vid << "," << last.seqno << ") vs ("
        << last_myvs.vid << "," << last_myvs.seqno << ")";
    if (!insync || vid != vid_insync)
        return rsm_protocol::BUSY;
    if (stf && last != last_myvs)
        r.state = stf->marshall_state();
    return rsm_protocol::OK;

// RPC handler: Inform the local node (the primary) that node m has synchronized
rsm_protocol::status rsm::transferdonereq(int &, const string & m, unsigned vid) {
    if (!insync || vid != vid_insync)
        return rsm_protocol::BUSY;
    backups.erase(std::find(backups.begin(), backups.end(), m));
    sync_cond.notify_one();
    return rsm_protocol::OK;
// A node that wants to join an RSM as a server sends a
// joinreq to the RSM's current primary; this is the
// handler for that RPC.
rsm_protocol::status rsm::joinreq(string & log, const string & m, viewstamp last) {
    auto ret = rsm_protocol::OK;
    LOG << "join request from " << m << "; last=(" << last.vid << "," << last.seqno << "), mylast=("
        << last_myvs.vid << "," << last_myvs.seqno << ")";
    if (cfg->ismember(m, vid_commit)) {
        LOG << m << " is still a member -- nothing to do";
    } else if (cfg->myaddr() != primary) {
        LOG << "but I, " << cfg->myaddr() << ", am not the primary, " << primary << "!";
        ret = rsm_protocol::BUSY;
        // We cache vid_commit to avoid adding m to a view which already contains
        // m due to a race condition
        LOG << "calling down to config layer";
        unsigned vid_cache = vid_commit;
        succ = cfg->add(m, vid_cache);
        if (cfg->ismember(m, cfg->view_id())) {
            LOG << "ret " << ret << " log " << log;
            LOG << "failed; proposer couldn't add " << succ;
            ret = rsm_protocol::BUSY;
// RPC handler: Responds with the list of known nodes for fall-back on a failure
rsm_client_protocol::status rsm::client_members(vector<string> & r, int) {
    vector<string> m = cfg->get_view(vid_commit);
    m.push_back(primary);
    LOG << "return " << m << " m " << primary;
    return rsm_client_protocol::OK;
// If the primary is a member of the new view, that node stays primary;
// otherwise, the lowest-numbered node of the previous view becomes primary.
// The caller should hold rsm_mutex.
void rsm::set_primary(unsigned vid) {
    vector<string> c = cfg->get_view(vid), p = cfg->get_view(vid - 1);
    VERIFY(c.size() > 0);
    if (isamember(primary, c)) {
        LOG << "primary stays " << primary;
    VERIFY(p.size() > 0);
    for (auto & pp : p) {
        if (isamember(pp, c)) {
            LOG << "primary is " << primary;
bool rsm::amiprimary() {
    return primary == cfg->myaddr() && !inviewchange;

// Test RPCs -- simulate partitions and failures
void rsm::net_repair(bool heal, lock & rsm_mutex_lock) {
    VERIFY(rsm_mutex_lock);
    for (auto & mm : cfg->get_view(vid_commit)) {
        if (mm != cfg->myaddr()) {
            LOG << "member " << mm << " " << heal;
            if (auto cl = rpcc::bind_cached(mm))
                cl->set_reachable(heal);
    rsmrpc->set_reachable(heal);

rsm_test_protocol::status rsm::test_net_repairreq(rsm_test_protocol::status & r, int heal) {
    LOG << "heal " << heal << " (dopartition "
        << dopartition << ", partitioned " << partitioned << ")";
    net_repair(heal, ml);
    return r = rsm_test_protocol::OK;
// Simulate failure at breakpoints 1 and 2
void rsm::breakpoint(int b) {
    if (breakpoints[b-1]) {
        LOG << "Dying at breakpoint " << b << " in rsm!";

void rsm::partition1(lock & rsm_mutex_lock) {
    net_repair(false, rsm_mutex_lock);

rsm_test_protocol::status rsm::breakpointreq(rsm_test_protocol::status & r, int b) {
    r = rsm_test_protocol::OK;
    LOG << "breakpoint " << b;
    if (b == 1) breakpoints[1-1] = true;
    else if (b == 2) breakpoints[2-1] = true;
    else if (b == 3 || b == 4) cfg->breakpoint(b);
    else r = rsm_test_protocol::ERR;