Compare commits

..

10 Commits

20 changed files with 183 additions and 404 deletions

View File

@@ -2,120 +2,38 @@
Thank you for your interest in helping out the FabAccess system!
You found a bug, an exploit or a feature that doesn't work like it's documented? Please tell us about it, see [Issues](#issues)
You have a feature request? Great, check out the paragraph on [Feature Requests](#feature-requests)
## Issues
While we try to not have any bugs or exploits or documentation bugs we're not perfect either. Thanks for helping us out!
We have labels that help us sort issues better, so if you know what would be the correct ones,
please tag your issue:
- `documentation` if it's a documentation issue, be it lacking docs or, even worse, wrong docs.
- `bug` is for software bugs, unexpected behaviour, crashes and so on.
- `exploit` for any bugs that may be used as RCE, to escalate privileges, or similar.
Don't worry if you aren't sure about the correct labels; an issue opened with no labels is much better than not knowing about the issue!
We have labels that help us sort issues better, so if you know what would be the correct ones, please tag your issue with one or multiple keywords. See [Labels](https://gitlab.com/fabinfra/fabaccess/bffh/-/labels) to get an overview of all keywords and their use case.
If you found an exploit and it's high-impact enough that you do not want to open an issue but
instead want direct contact with the developers, you can find public keys respectively fingerprints
for GPG, XMPP+OMEMO and Matrix+MegOlm in the git repository as blobs with tags assigned to them.
You can import the gpg key for dequbed either from the repository like so:
```
$ git cat-file -p keys/dequbed/gpg | gpg --import
```
Or from your local trusted gpg keyserver, and/or verify it using [keybase](https://keybase.io/dequbed)
This key is also used to sign the other tags, so to verify them you can run e.g.
```
$ git tag -v keys/dequbed/xmpp+omemo
```
Especially for **bugs** and **exploits**, please mark your issue as "confidential" if you think it impacts the `stable` branch. If you're not sure, mark it as confidential anyway. It's easier to publish information than it is to un-publish information. You may also contact us by [mail](https://fab-access.org/impressum).
## Feature Requests
We also like new feature requests of course!
But before you open an issue in this repo for a feature request, please first check a few things:
1. Is it a feature that needs to be implemented in more than just the backend server? For example, is it something also having a GUI-component or something that you want to be able to do via the API? If so it's better suited over at the
[Lastenheft](https://gitlab.com/fabinfra/fabaccess_lastenheft) because that's where the required coordination for that will end up happening
2. Who else needs that feature? Is this something super specific to your environment/application or something that others will want too? If it's something that's relevant for more people please also tell us that in the feature request.
3. Can you already get partway or all the way there using what's there already? If so please also tell us what you're currently doing and what doesn't work or why you dislike your current solution.
## Contributing Code
To help develop Difluoroborane you will need a Rust toolchain. We heavily recommend installing [rustup](https://rustup.rs) even if your distribution provides a recent enough rustc, simply because it allows you to easily switch between several compiler versions, both stable and nightly. It also allows you to download the respective stdlib crate, giving you the option of an offline reference.
We use a stable release branch / moving development workflow. This means that all *new* development should happen on the `development` branch, which is regularly merged into `stable` as releases. The exceptions, of course, are bug- and hotfixes, which can target whichever branch.
If you want to add a new feature please work off the development branch. We suggest you create yourself a feature branch, e.g. using
```git switch development; git checkout -b feature/my-cool-feature```
Using a feature branch keeps your local `development` branch clean, making it easier to later rebase your feature branch onto it before you open a pull/merge request.
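Put together, the workflow described above might look like the following sketch; the branch name `feature/my-cool-feature` and the remote `origin` are the usual placeholders:

```shell
# Sketch of the full feature-branch cycle.
git switch development                    # start from the development branch
git pull --ff-only                        # bring it up to date
git checkout -b feature/my-cool-feature   # create the feature branch

# ... commit your changes here ...

# Before opening the merge request, rebase onto the current development branch:
git fetch origin
git rebase origin/development
```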
# Development Setup
## Cross-compilation
If you want to cross-compile you need both a C toolchain for your target and the Rust stdlib for said target.
As an example for the target `aarch64-unknown-linux-gnu` (64-bit ARMv8 running Linux with glibc, e.g. a Raspberry Pi 3 or later with a 64-bit Debian Linux installation):
1. Install C-toolchain using your distro package manager:
- On Archlinux: `pacman -S aarch64-linux-gnu-gcc`
2. Install the Rust stdlib:
- using rustup: `rustup target add aarch64-unknown-linux-gnu`
3. Configure your cargo config:
### Configuring cargo
You need to tell Cargo to use your C toolchain. For this you need a block in [your user cargo config](https://doc.rust-lang.org/cargo/reference/config.html) setting at least the path to the cross gcc as `linker` and the cross ar as `ar`:
```toml
[target.aarch64-unknown-linux-gnu]
# You must set the cross gcc as the linker so that the correct target
# libraries and startup objects are used when linking.
linker = "aarch64-linux-gnu-gcc"
ar = "aarch64-linux-gnu-ar"
```
This block should be added to your **user** cargo config (usually
`~/.cargo/config.toml`), since these values can differ between distros and
users.
To actually compile for the given triple you need to call `cargo build`
with the `--target` flag:
```
$ cargo build --release --target=aarch64-unknown-linux-gnu
```
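Cargo can also pick up the linker setting from an environment variable, which can be easier in CI jobs; this fragment is a sketch equivalent to the `linker` line in the config block above (Cargo derives the variable name from the target triple, uppercased with dashes replaced by underscores):

```shell
# Equivalent to linker = "aarch64-linux-gnu-gcc" in the
# [target.aarch64-unknown-linux-gnu] block of the cargo config:
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
```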
## Tests
Sadly, still very much `// TODO:`. We're working on it! :/
When you want feedback on your current progress or are ready to have it merged upstream open a merge request. Don't worry, we don't bite! ^^

View File

@@ -1,54 +0,0 @@
# Installation
Currently there are no distribution packages available.
However, installation is reasonably straightforward, since Difluoroborane compiles into a single, mostly static binary with few dependencies.
At the moment only Linux is supported. If you managed to compile Difluoroborane on another platform, please open an issue outlining your steps or open a merge request expanding this part. Thanks!
## Requirements
General requirements; scroll down for distribution-specific instructions
- GNU SASL (libgsasl).
* If you want to compile Difluoroborane from source you will potentially also need development
headers
- capnproto
- rustc stable / nightly >= 1.48
* If your distribution does not provide a recent enough rustc, [rustup](https://rustup.rs/) helps you install a local toolchain and keep it up to date.
###### Arch Linux:
```shell
$ pacman -S gsasl rust capnproto
```
## Compiling from source
Difluoroborane uses Cargo, so compilation boils down to:
```shell
$ cargo build --release
```
The guide at https://www.geeksforgeeks.org/how-to-install-rust-on-raspberry-pi/ shows how to install Rust on a Raspberry Pi or another Linux computer.
The compiled binary can then be found in `./target/release/bffhd`
### Cross-compiling
If you need to compile for a different CPU target than your own (e.g. you want to use BFFH on a Raspberry Pi but compile on your desktop PC), you need to set up a cross-compilation toolchain and configure Cargo correctly.
[The `CONTRIBUTING.md` has a section on how to setup a cross-compilation system.](CONTRIBUTING.md#cross-compilation)
# Running bffhd
The server can be run either using `cargo`, which will also compile the binary if necessary, or directly.
When running using `cargo` you need to pass arguments to bffh after a `--`, so
e.g. `cargo run --release -- --help` or `cargo run --release -- -c examples/bffh.toml`.
When running directly the `bffhd` binary can be copied anywhere.
A list of arguments for the server is printed by the help: `bffhd --help` or `cargo run --release -- --help`.

View File

@@ -20,7 +20,7 @@ be ported to as many platforms as possible.
## Installation
See [INSTALL.md](INSTALL.md)
See [https://fab-access.org/install](https://fab-access.org/install)
## Contributing

View File

@@ -32,7 +32,7 @@ pub struct PrivilegesBuf {
// i.e. "bffh.perm" is not the same as "bffհ.реrm" (Armenian 'հ':Հ and Cyrillic 'е':Е)
// See also https://util.unicode.org/UnicodeJsps/confusables.jsp
pub struct PermissionBuf {
inner: String,
pub inner: String,
}
impl PermissionBuf {
#[inline(always)]

View File

@@ -131,11 +131,11 @@ pub struct Role {
/// This makes situations where different levels of access are required easier: Each higher
/// level of access sets the lower levels of access as parent, inheriting their permission; if
/// you are allowed to manage a machine you are then also allowed to use it and so on
parents: Vec<String>,
pub parents: Vec<String>,
// If a role doesn't define permissions, default to an empty Vec.
#[serde(default, skip_serializing_if = "Vec::is_empty")]
permissions: Vec<PermRule>,
pub permissions: Vec<PermRule>,
}
impl Role {

View File

@@ -143,7 +143,8 @@ impl admin::Server for User {
// Only update if needed
if !target.userdata.roles.iter().any(|r| r.as_str() == rolename) {
target.userdata.roles.push(rolename.to_string());
pry!(self.session
pry!(self
.session
.users
.put_user(self.user.get_username(), &target));
}
@@ -168,7 +169,8 @@ impl admin::Server for User {
// Only update if needed
if target.userdata.roles.iter().any(|r| r.as_str() == rolename) {
target.userdata.roles.retain(|r| r.as_str() != rolename);
pry!(self.session
pry!(self
.session
.users
.put_user(self.user.get_username(), &target));
}

View File

@@ -5,7 +5,7 @@ use std::path::PathBuf;
use serde::{Deserialize, Serialize};
use crate::authorization::permissions::PrivilegesBuf;
use crate::authorization::permissions::{PermRule, PermissionBuf, PrivilegesBuf};
use crate::authorization::roles::Role;
use crate::capnp::{Listen, TlsListen};
use crate::logging::LogConfig;
@@ -60,28 +60,13 @@ pub struct MachineDescription {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub spacename: String,
pub instanceurl: String,
/// A list of address/port pairs to listen on.
pub listens: Vec<Listen>,
/// Machine descriptions to load
pub machines: HashMap<String, MachineDescription>,
/// Actors to load and their configuration options
pub actors: HashMap<String, ModuleConfig>,
/// Initiators to load and their configuration options
pub initiators: HashMap<String, ModuleConfig>,
pub mqtt_url: String,
pub actor_connections: Vec<(String, String)>,
pub init_connections: Vec<(String, String)>,
pub db_path: PathBuf,
pub auditlog_path: PathBuf,
pub roles: HashMap<String, Role>,
#[serde(flatten)]
pub tlsconfig: TlsListen,
@@ -94,9 +79,22 @@ pub struct Config {
#[serde(default, skip)]
pub logging: LogConfig,
pub spacename: String,
pub mqtt_url: String,
pub db_path: PathBuf,
pub auditlog_path: PathBuf,
pub instanceurl: String,
pub roles: HashMap<String, Role>,
/// Machine descriptions to load
pub machines: HashMap<String, MachineDescription>,
/// Actors to load and their configuration options
pub actors: HashMap<String, ModuleConfig>,
pub actor_connections: HashMap<String, String>,
/// Initiators to load and their configuration options
pub initiators: HashMap<String, ModuleConfig>,
pub init_connections: HashMap<String, String>,
}
impl Config {
@@ -123,50 +121,136 @@ impl Default for Config {
fn default() -> Self {
let mut actors: HashMap<String, ModuleConfig> = HashMap::new();
let mut initiators: HashMap<String, ModuleConfig> = HashMap::new();
let machines = HashMap::new();
let mut roles: HashMap<String, Role> = HashMap::new();
let mut machines: HashMap<String, MachineDescription> = HashMap::new();
roles.insert(
"admin".to_string(),
Role {
parents: Vec::new(),
permissions: vec![
PermRule::Base(PermissionBuf {
inner: "bffh.users.info".to_string(),
}),
PermRule::Base(PermissionBuf {
inner: "bffh.users.manage".to_string(),
}),
PermRule::Base(PermissionBuf {
inner: "bffh.users.admin".to_string(),
}),
],
},
);
roles.insert(
"member".to_string(),
Role {
parents: Vec::new(),
permissions: vec![
PermRule::Base(PermissionBuf {
inner: "lab.some.disclose".to_string(),
}),
PermRule::Base(PermissionBuf {
inner: "lab.some.read".to_string(),
}),
PermRule::Base(PermissionBuf {
inner: "lab.some.write".to_string(),
}),
PermRule::Base(PermissionBuf {
inner: "lab.some.manage".to_string(),
}),
],
},
);
machines.insert(
"resource_a".to_string(),
MachineDescription {
name: "Resource A".to_string(),
description: Option::from("A description".to_string()),
wiki: Option::from("https://some.wiki.url".to_string()),
category: Option::from("A category".to_string()),
privs: PrivilegesBuf {
disclose: PermissionBuf {
inner: "lab.some.disclose".to_string(),
},
read: PermissionBuf {
inner: "lab.some.read".to_string(),
},
write: PermissionBuf {
inner: "lab.some.write".to_string(),
},
manage: PermissionBuf {
inner: "lab.some.manage".to_string(),
},
},
},
);
machines.insert(
"resource_b".to_string(),
MachineDescription {
name: "Resource B".to_string(),
description: Option::from("A description".to_string()),
wiki: Option::from("https://some.wiki.url".to_string()),
category: Option::from("A category".to_string()),
privs: PrivilegesBuf {
disclose: PermissionBuf {
inner: "lab.some.disclose".to_string(),
},
read: PermissionBuf {
inner: "lab.some.read".to_string(),
},
write: PermissionBuf {
inner: "lab.some.write".to_string(),
},
manage: PermissionBuf {
inner: "lab.some.manage".to_string(),
},
},
},
);
actors.insert(
"Actor".to_string(),
"actor_123".to_string(),
ModuleConfig {
module: "Shelly".to_string(),
params: HashMap::new(),
},
);
initiators.insert(
"Initiator".to_string(),
"initiator_123".to_string(),
ModuleConfig {
module: "TCP-Listen".to_string(),
module: "Process".to_string(),
params: HashMap::new(),
},
);
Config {
spacename: "fabaccess.sample.space".into(),
instanceurl: "https://fabaccess.sample.space".into(),
listens: vec![Listen {
address: "127.0.0.1".to_string(),
port: None,
}],
actors,
initiators,
machines,
mqtt_url: "tcp://localhost:1883".to_string(),
actor_connections: vec![("Testmachine".to_string(), "Actor".to_string())],
init_connections: vec![("Initiator".to_string(), "Testmachine".to_string())],
db_path: PathBuf::from("/var/lib/bffh/bffh.db"),
auditlog_path: PathBuf::from("/var/log/bffh/audit.json"),
roles: HashMap::new(),
tlsconfig: TlsListen {
certfile: PathBuf::from("/etc/bffh/certs/bffh.crt"),
keyfile: PathBuf::from("/etc/bffh/certs/bffh.key"),
..Default::default()
},
tlskeylog: None,
verbosity: 0,
logging: LogConfig::default(),
instanceurl: "".into(),
spacename: "".into(),
}
mqtt_url: "mqtt://127.0.0.1:1883".to_string(),
db_path: PathBuf::from("/var/lib/bffh/bffh.db"),
auditlog_path: PathBuf::from("/var/log/bffh/audit.json"),
roles,
machines,
actors,
actor_connections: vec!(("actor".to_string(), "actor_123".to_string()), ("machine".to_string(),"resource_a".to_string())).into_iter().collect(),
initiators,
init_connections: vec!(("initiator".to_string(), "initiator_123".to_string()), ("machine".to_string(),"resource_b".to_string())).into_iter().collect(),
}
}
}

View File

@@ -2,9 +2,7 @@ use crate::initiators::dummy::Dummy;
use crate::initiators::process::Process;
use crate::resources::modules::fabaccess::Status;
use crate::session::SessionHandle;
use crate::{
AuthenticationHandle, Config, Resource, ResourcesHandle, SessionManager,
};
use crate::{AuthenticationHandle, Config, Resource, ResourcesHandle, SessionManager};
use executor::prelude::Executor;
use futures_util::ready;
use std::collections::HashMap;

View File

@@ -208,9 +208,11 @@ impl Difluoroborane {
pub fn dump_db(&mut self, file: &str) -> Result<(), miette::Error> {
let users = self.users.dump_map()?;
let state = self.statedb.dump_map()?;
let dump = DatabaseDump{users, state};
let data = toml::ser::to_vec(&dump).map_err(|e| miette::Error::msg(format!("Serializing database dump failed: {}", e)))?;
std::fs::write(file, &data).map_err(|e| miette::Error::msg(format!("writing database dump failed: {}", e)))?;
let dump = DatabaseDump { users, state };
let data = toml::ser::to_vec(&dump)
.map_err(|e| miette::Error::msg(format!("Serializing database dump failed: {}", e)))?;
std::fs::write(file, &data)
.map_err(|e| miette::Error::msg(format!("writing database dump failed: {}", e)))?;
Ok(())
}
@@ -236,7 +238,8 @@ impl Difluoroborane {
self.resources.clone(),
sessionmanager.clone(),
authentication.clone(),
).expect("initializing initiators failed");
)
.expect("initializing initiators failed");
// TODO 0.5: error handling. Add variant to BFFHError
actors::load(self.executor.clone(), &self.config, self.resources.clone())?;

View File

@@ -90,7 +90,11 @@ impl Inner {
.unwrap()
.log(self.id.as_str(), &format!("{}", state));
if let Err(e) = res {
tracing::error!("Writing to the audit log failed for {} {}: {e}", self.id.as_str(), state);
tracing::error!(
"Writing to the audit log failed for {} {}: {e}",
self.id.as_str(),
state
);
}
self.signal.set(state);
@@ -164,7 +168,9 @@ impl Resource {
fn set_state(&self, state: MachineState) {
let mut serializer = AllocSerializer::<1024>::default();
serializer.serialize_value(&state).expect("serializing a MachineState should be infallible");
serializer
.serialize_value(&state)
.expect("serializing a MachineState should be infallible");
let archived = ArchivedValue::new(serializer.into_serializer().into_inner());
self.inner.set_state(archived)
}

View File

@@ -1,5 +1,5 @@
use rkyv::ser::Serializer;
use rkyv::ser::serializers::AllocSerializer;
use rkyv::ser::Serializer;
use thiserror::Error;
use crate::db;
@@ -54,8 +54,7 @@ impl StateDB {
}
pub fn open_with_env(env: Arc<Environment>) -> Result<Self, StateDBError> {
let db = RawDB::open(&env, Some("state"))
.map_err(|e| StateDBError::Open(e.into()))?;
let db = RawDB::open(&env, Some("state")).map_err(|e| StateDBError::Open(e.into()))?;
Ok(Self::new(env, db))
}
@@ -117,11 +116,14 @@ impl StateDB {
pub fn dump_map(&self) -> miette::Result<std::collections::HashMap<String, State>> {
let mut map = std::collections::HashMap::new();
for (key, val) in self.get_all(&self.begin_ro_txn()?)? {
let key_str = core::str::from_utf8(&key).map_err(|_e| miette::Error::msg("state key not UTF8"))?.to_string();
let val_state: State = rkyv::Deserialize::deserialize(val.as_ref(), &mut rkyv::Infallible).unwrap();
let key_str = core::str::from_utf8(&key)
.map_err(|_e| miette::Error::msg("state key not UTF8"))?
.to_string();
let val_state: State =
rkyv::Deserialize::deserialize(val.as_ref(), &mut rkyv::Infallible).unwrap();
map.insert(key_str, val_state);
}
Ok(map)
Ok(map)
}
}

View File

@@ -1,5 +1,5 @@
use std::fmt::{Debug, Display, Formatter};
use std::fmt;
use std::fmt::{Debug, Display, Formatter};
use std::ops::Deref;

View File

@@ -173,7 +173,7 @@ impl Users {
Ok(())
}
pub fn load_map(&mut self, dump: &HashMap<String,UserData>) -> miette::Result<()> {
pub fn load_map(&mut self, dump: &HashMap<String, UserData>) -> miette::Result<()> {
let mut txn = unsafe { self.userdb.get_rw_txn() }?;
self.userdb.clear_txn(&mut txn)?;
@@ -194,7 +194,7 @@ impl Users {
}
pub fn dump_map(&self) -> miette::Result<HashMap<String, UserData>> {
return Ok(self.userdb.get_all()?)
return Ok(self.userdb.get_all()?);
}
pub fn dump_file(&self, path_str: &str, force: bool) -> miette::Result<usize> {
let path = Path::new(path_str);

View File

@@ -122,9 +122,7 @@ fn main() -> miette::Result<()> {
Err(error) => error.exit(),
};
let configpath = matches
.value_of("config")
.unwrap_or("/etc/bffh/bffh.dhall");
let configpath = matches.value_of("config").unwrap_or("/etc/bffh/bffh.dhall");
// Check for the --print-default option first because we don't need to do anything else in that
// case.
@@ -133,7 +131,7 @@ fn main() -> miette::Result<()> {
let encoded = serde_dhall::serialize(&config).to_string().unwrap();
// Direct writing to fd 1 is faster but also prevents any print-formatting that could
// invalidate the generated TOML
// invalidate the generated DHALL
let stdout = io::stdout();
let mut handle = stdout.lock();
handle.write_all(encoded.as_bytes()).unwrap();
@@ -187,9 +185,13 @@ fn main() -> miette::Result<()> {
} else if matches.is_present("load-users") {
let bffh = Difluoroborane::new(config)?;
bffh.users.load_file(matches.value_of("load-users").unwrap())?;
bffh.users
.load_file(matches.value_of("load-users").unwrap())?;
tracing::info!("loaded users from {}", matches.value_of("load-users").unwrap());
tracing::info!(
"loaded users from {}",
matches.value_of("load-users").unwrap()
);
return Ok(());
} else {

View File

@@ -1,39 +0,0 @@
strict digraph connection {
Establish [label="TCP/SCTP connection established"];
Closed [label="TCP/SCTP connection closed"];
Open;
SASL;
Authenticated;
STARTTLS;
Encrypted;
Establish -> Open [label=open];
Open -> Closed [label=close];
Open -> SASL [label=auth];
SASL -> SASL [label=step];
// Authentication fails
SASL -> Closed [label=fails];
// Authentication succeeds
SASL -> Authenticated [label=successful];
Open -> STARTTLS [label=starttls];
// TLS wrapping succeeds
STARTTLS -> Encrypted [label=successful];
// TLS wrapping fails
STARTTLS -> Closed [label=fails];
Authenticated -> SASL_TLS [label=starttls];
SASL_TLS -> Closed [label=fails];
SASL_TLS -> AuthEnc [label=successful];
Encrypted -> TLS_SASL [label=auth];
TLS_SASL -> TLS_SASL [label=step];
TLS_SASL -> Closed [label=fails];
TLS_SASL -> AuthEnc [label=successful];
// Only authenticated connections may open RPC. For "unauth", use the `Anonymous` SASL method.
AuthEnc -> RPC [label=bootstrap];
Authenticated -> RPC [label=bootstrap];
}

View File

@@ -1,42 +0,0 @@
# Stream initiation
In a session there are two parties: the initiating entity and the receiving entity. This terminology does not refer to information flow but to which side opens a connection and which side listens for connection attempts.
In the currently envisioned use-case the initiating entity is either a) a client (i.e. an interactive or batch/automated program) trying to interact in some way with a server, or b) a server trying to exchange or request information with/from another server (i.e. federating). The receiving entity, however, is always a server.
Additionally, the number and variety of clients is likely to be larger, and clients are likely to be less up to date than the servers.
Conclusions I draw from this:
- Clients are more likely to implement an outdated version of the communication
protocol.
- The place for backwards-compatibility should be the servers.
- Thus the client (initiating entity) should send the expected API version
first, the server then using that as a basis to decide with which API
version to answer.
# Stream negotiation
Since the receiving entity for a connection is responsible for the machines it controls, it imposes conditions for connecting, either as a client or as a federating server. At the very least, every initiating entity is required to authenticate itself to the receiving entity before attempting further actions or requesting information. A receiving entity can, however, require other features, such as transport layer encryption.
To this end, a receiving entity informs the initiating entity about the features it requires before taking any further action, and about features that are voluntary to negotiate but may improve qualities of the stream (such as message compression).
A varying set of conditions implies negotiation needs to take place. Since
features potentially require a strict order (e.g. Encryption before
Authentication) negotiation has to be a multi-stage process. Further
restrictions are imposed because some features may only be offered after others
have been established (e.g. SASL authentication only becoming available after
encryption, EXTERNAL mechanism only being available to local sockets or
connections providing a certificate)

View File

@@ -1,93 +0,0 @@
strict digraph state {
rank = 0
subgraph "cluster_internal_state" {
rank = 1
ctr = applied
start
[shape=doublecircle, label="BFFH"]
created
[label="Machine object created"];
start -> created;
created -> attach
[label="New state or loaded from disk"];
attach
[label="Attach actor", shape=box];
unapplied
[label="Unapplied"];
applied
[label="Applied"];
verified
[label="Verified"];
wait_apply
[label="Wait ∀ Actors", shape=box]
wait_verify
[label="Wait ∀ Actors", shape=box]
unapplied -> wait_apply -> applied;
applied -> wait_verify -> verified;
applied -> unapplied
[label="statechange received"];
verified -> unapplied
[label="statechange received"];
unapplied -> unapplied
[label="statechange received"];
unapplied -> attach -> unapplied;
applied -> attach -> unapplied;
verified -> attach -> unapplied;
}
subgraph "cluster_actor" {
rank = 1
center = actor_applied
actor_start
[shape=doublecircle, label="Actor"];
actor_fresh
[label="Actor was just constructed"];
actor_start -> actor_fresh;
actor_attached
[label="Attached"];
actor_unapplied
[label="Unapplied"];
actor_applied
[label="Applied"];
actor_verified
[label="Verified"];
wait_initial
[label="Recv", shape=box];
wait_state
[label="Recv", shape=box];
actor_fresh -> wait_initial -> actor_attached;
actor_attached -> actor_applied
[label="initialize/apply"];
actor_unapplied -> actor_applied
[label="apply"];
actor_applied -> actor_verified
[label="verify"];
actor_unapplied -> wait_state;
actor_applied -> wait_state;
actor_verified -> wait_state;
wait_state -> actor_unapplied;
}
attach -> wait_initial
[label="Send initial state to that actor", style=dotted]
unapplied -> wait_state
[label="Send new state to all actors", style=dotted];
actor_applied -> wait_apply
[label="Confirm apply", style=dotted];
actor_verified -> wait_verify
[label="Confirm verify", style=dotted];
}

View File

@@ -1,8 +0,0 @@
[UniqueUser]
roles = ["foorole", "barrole"]
[DuplicateUser]
roles = ["somerole"]
[DuplicateUser]
roles = ["different", "roles"]