Napp is an abbreviation of ‘Nano App’, a term I am using to describe a web application
that has a very small footprint. Specifically, Napp bootstraps a new application for you by
creating the project, generating the necessary files, and wiring them all together so you can
dive straight into building your application.
What is a Nano App?
The rules are simple: a nano app must have as few files as possible, as few directories as possible,
little to no JavaScript (other than HTMX), a small Docker image (size does matter), and it must be easy
to develop and deploy.
If you want to take a look at how a Nano App is structured by napp, please visit Napp Generated.
Below are some potential use cases for a Nano App.
UI/UX Experimentation – Use HTMX’s dynamic capabilities to experiment with different user events.
Proof of Concepts – Create a lightweight app to showcase the feasibility of a technical concept.
Admin Interfaces – Develop basic admin panels to manage users, or config settings for internal systems.
Go Practice – Use Napp as a framework to build small projects and improve your Go and web skills.
HTMX Exploration – Explore the power of HTMX for building reactive interfaces with minimal JavaScript.
Prerequisites
Go (this project is built with 1.24), though it should work with older versions too.
Make sure your Go bin is added to your path so that installed packages can be used globally.
Installation
Using Go
go install github.com/damiensedgwick/napp@latest
Using a release binary
You can download the compiled binary for your machine from Releases.
Once you have done this, make sure the executable has the correct permissions and is in your path.
Usage
Generate a new Napp
napp init <project-name>
cd <project-name>
go mod init <your-chosen-path>
go mod tidy
Other commands
Display the Napp help menu to get a list of currently available commands.
napp --help
Get the current version with the following command:
napp --version
Running the application
Go
Go has everything you need to build and run the application locally, and it is
usually the default choice when you want to develop and iterate quickly.
go run cmd/main.go
Docker
Docker has been set up so that the binary is prebuilt using Go and then simply
copied into the Docker image, resulting in a smaller footprint for the final Docker image.
NB: The Dockerfile is currently set up to build for Linux; if your OS is not Linux,
you will need to change it accordingly.
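Alternatively, since Go can cross-compile, you can build a Linux binary from any OS without touching the Dockerfile. A minimal sketch (the output name and package path are assumptions — match whatever path the generated Dockerfile copies):
GOOS=linux GOARCH=amd64 go build -o main cmd/main.go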
docker build -t app-name .
docker run -d -p 8080:8080 app-name
Deployment
At the moment I recommend using Fly.io for deploying Nano Apps. They provide a great
command line tool that makes managing the process easy, and you can run three small projects
there, each with a 1 GB persisted volume for a SQLite database, for prototyping.
Getting your Nano App Deployed on Fly.io
You will need to make sure you have the Fly.io command line tools installed for the
following steps. If you do not have them installed, you can find out how to install them
here: flyctl command line tool
Once you have installed the above command line tools and signed in to Fly.io, you are
ready to proceed with the next steps. Each step is supposed to be run from your project
root.
Get your app ready for deployment: fly launch --no-deploy
The command line will prompt you to check whether you would like to change the default
configuration. It is important to say yes so that you can select your region and
lower the app’s memory to 256 MB instead of the default settings.
Thanks to the --no-deploy flag, this command creates your application on Fly.io and
provisions the necessary resources (such as our volume) without actually deploying it yet.
List Machines to check attached volumes: fly machine list
# expected output
my-app-name
ID NAME STATE REGION IMAGE IP ADDRESS VOLUME CREATED LAST UPDATED APP PLATFORM PROCESS GROUP SIZE
328773d3c47d85 my-app-name stopped yul flyio/myimageex:latest fdaa:2:45b:a7b:19c:bbd4:95bb:2 vol_6vjywx86ym8mq3xv 2023-08-20T23:09:24Z 2023-08-20T23:16:15Z v2 app shared-cpu-1x:256MB
List volumes to check attached Machines: fly volumes list
# expected output
ID STATE NAME SIZE REGION ZONE ENCRYPTED ATTACHED VM CREATED AT
vol_zmjnv8m81p5rywgx created data 1GB lhr b6a7 true 5683606c41098e 3 minutes ago
Add your environment variables to Fly.io
You will need to add the following variables to your app in the Fly.io dashboard.
First navigate to your machine and then to secrets.
Add the following 2 secrets:
APPNAME_COOKIE_STORE_SECRET="value" or APP_NAME_COOKIE_STORE_SECRET="value"
and
APPNAME_DB_PATH="value" or APP_NAME_DB_PATH="value"
NB: Check your generated .env file to see how your app name should be
written, then add your database path value and your cookie store secret value.
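For illustration, a hypothetical .env for an app named my-app might look like the following (the names and values here are assumptions; your generated .env file is the source of truth):
MY_APP_COOKIE_STORE_SECRET="some-long-random-string"
MY_APP_DB_PATH="./my-app.db"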
Once these have both been added, we should be ready to deploy for the last time!
Run the following command to deploy your app and make use of your .env variables:
fly deploy
🎉 That should be all you need to do to get your nano app deployed on Fly.io with persisted storage. 🎉
Contributing
I’d love to have your help making Napp even better! Here’s how to get involved:
Fork the repository. This creates your own copy where you can work on changes.
Create a separate branch for your changes. This helps keep your work organised.
Open a pull request with a clear description of your contributions.
Contributors
A huge shoutout to the following for contributing towards Napp and making it all that
little bit better!
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the “Software”), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Paste middleware to produce a CADF audit trail from OpenStack API calls. Currently, the following OpenStack services are supported out of the box:
Nova
Neutron
Cinder
Designate
Manila
Glance
Barbican
Ironic
Octavia
Additional APIs can be supported without major code changes through templates.
It is a major redesign of the original audit module within the keystonemiddleware. It has been designed to produce a more
verbose audit trail that can be consumed by auditors, end users and complex event processing infrastructures alike.
For that reason it does not adhere to the existing OpenStack taxonomy for CADF. The biggest difference is that it populates
the target part of the CADF event with the actual resource/object affected by the action. This is a prerequisite
for user-friendly presentation of events, e.g. navigation from an event to its target objects. Previously, the essential information about which target object had been affected by an audit-relevant action was buried in the requestPath attribute or was
not available at all.
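For illustration only, a hedged sketch of roughly what such an event might look like with the target part populated (field values are invented; the exact attribute set is defined by the CADF specification, and this middleware’s output may differ):
{
  "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
  "eventType": "activity",
  "action": "update",
  "outcome": "success",
  "initiator": { "id": "<user-id>", "typeURI": "service/security/account/user" },
  "target": { "id": "<server-id>", "typeURI": "compute/server", "name": "myserver" },
  "requestPath": "/v2.1/<project-id>/servers/<server-id>"
}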
For operators the difference is minor though. The integration of the new middleware in an OpenStack service works the same way; the only changes are different mapping files and, of course, the new binaries.
(Figure: the audit middleware in Nova’s pipeline.)
Embedding
To enable auditing, oslo.messaging should be installed. If it is not, the middleware
will write audit events to the application log instead.
Auditing is enabled by editing the service’s api-paste.ini file to include an audit
filter definition.
To properly audit API requests, the audit middleware requires a mapping
file. The mapping file describes how to generate CADF events out of REST API calls.
The location of the mapping file should be specified explicitly by adding the
path to the audit_map_file option of the filter definition.
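The definition itself is not reproduced in this text; a minimal sketch of what it typically looks like, using Nova as an example (the filter factory entry point is an assumption — check this repo’s etc examples for the exact path):
[filter:audit]
paste.filter_factory = auditmiddleware:filter_factory
audit_map_file = /etc/nova/nova_api_audit_map.yaml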
For each supported OpenStack service, a mapping file named
<service>_api_audit_map.yaml is included in the etc folder of this repo.
Additional options can be set:
Certain types of HTTP requests can be ignored entirely. Typically, GET and HEAD
requests should not cause the creation of audit events due to their sheer volume.
# ignore any GET or HEAD requests
ignore_req_list = GET, HEAD
Payload Recording
The payload of the API response to a CRUD request can optionally be attached to the event. This will increase the size of the events, but brings a lot of value when it comes to diagnostics. Sensitive information can be filtered out using the payloads attribute of the resource mapping specification (see below).
# turn on recording of request payloads
record_payloads = True
Oslo Messaging
The audit middleware can be configured to use its own exclusive notification driver
and topic(s). This can be useful when the service is already using oslo
messaging notifications and a different driver is wanted for auditing, e.g. the
service sends its existing notifications to a queue via ‘messagingv2’ and audit
notifications should go to a log file via the ‘log’ driver.
The oslo messaging configuration for the audit middleware lives in the same place as
the service’s own oslo messaging configuration: the service configuration file (e.g. neutron.conf).
An example is shown below:
[audit_middleware_notifications]
driver = log
When audit events are sent via ‘messagingv2’ or ‘messaging’, the middleware can
specify a transport URL if its transport URL needs to be different from the
service’s own messaging transport setting. Other transport-related settings are
read from the oslo messaging sections defined in the service configuration, e.g.
‘oslo_messaging_rabbit’.
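A hedged example combining both settings (the URL is a placeholder; transport_url is the standard oslo.messaging option name, assumed to be honoured here):
[audit_middleware_notifications]
driver = messagingv2
transport_url = rabbit://user:password@audit-rabbit:5672/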
The middleware can emit statistics about the audit events it produces using tagged StatsD metrics. This requires a DogStatsD-compatible StatsD service like the Prometheus StatsD exporter.
# turn on metrics
metrics_enabled = True
The default StatsD host and port can be customized using environment variables:
STATSD_HOST: the StatsD hostname
STATSD_PORT: the StatsD port number
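For example, to point the middleware at a local Prometheus StatsD exporter (9125 is that exporter’s default StatsD port; adjust to your setup):
export STATSD_HOST=localhost
export STATSD_PORT=9125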
The following metrics and dimensions are supported:
openstack_audit_events – Statistics on audit events per tenant, including events not yet delivered. Tags: action (CADF action ID), project_id (OpenStack project/domain ID), service (OpenStack service type), target_type (CADF type URI of the target resource), outcome (failed/success/unknown)
openstack_audit_events_backlog – Events buffered in memory, waiting for the message queue to catch up
openstack_audit_messaging_overflows – Number of events lost due to message queue latency or downtime
openstack_audit_messaging_errors – Failed attempts to push to the message queue, leading to events being dumped into log files
Mapping Rules
The creation of audit events is driven by so-called mapping rules. The CADF mapping rules are essentially a model of resources. Following OpenStack API design patterns, this model implies how the HTTP API requests are formed.
The path of an HTTP request specifies the resource that is the target of the request. It consists of a prefix and a resource path. The resource path denotes the resource; the prefix is used for versioning and routing. Sometimes it is even used to specify the target project of an operation (e.g. in Cinder).
In the mapping file, the prefix is specified using a regular expression. In those cases where the prefix contains the target project ID, the regular expression needs to capture the relevant part of the prefix using a named match group called project_id:
prefix: '/v2.0/(?P<project_id>[0-9a-f\-]*)'
The resource path is a concatenation of resource names and IDs. URL paths follow one of the following patterns:
/<resources>: HTTP POST for create, GET for list
/<resources>/<resource-id>: HTTP GET for read, PUT for update, DELETE for remove
/<resources>/<resource-id>/action: POST to perform an action specified by the payload. The payload is expressed as {<action>: {<parameter1>: <pvalue1>, ...}}
/<resources>/<resource-id>/<custom-action>: perform a custom action specified in the mapping (otherwise this is interpreted as a key, see below)
/<resources>/<resource-id>/<key>: update a key of a resource
/<resources>/<resource-id>/<child-resource>: like top-level resource
/<resources>/<resource-id>/<child-resource>/<child-resource-id>: like top-level resource
For singletons, i.e. resources with only a single instance, the <resource-id> parts are omitted.
Additional hints are added to address exceptions to these design patterns.
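As a hedged illustration of these patterns: GET /<prefix>/servers lists servers (action read/list), POST /<prefix>/servers creates one (action create), and GET /<prefix>/servers/<server-id> reads a single instance (action read), with the target typeURI derived from the resource, e.g. compute/servers or compute/server respectively.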
Elements by Example
The mapping file starts with general information on the service:
service_type: The type of service according to the CADF type taxonomy, i.e. the root of the type hierarchy. All resources of the service are added beneath that root.
prefix: The URL prefix used by the service. Some OpenStack services specify the target project/domain in the URL, others rely on the authorization scope or special parameters.
# service type as configured in the OpenStack catalog
service_type: compute
# configure prefix, use the named match group 'project_id' to mark the tenant
prefix: '/v2[0-9\.]*/(?P<project_id>[0-9a-f\-]*)'
This is followed by a description of the service’s resource hierarchy.
The following defines a resource with the typeURI compute/servers.
resources:
  servers:
    # type_uri: compute/servers (default)
    # el_type_uri: compute/server (default)
    custom_actions:
      startup: start/startup
    custom_attributes:
      # always attach the security_groups attribute value
      # which has type compute/server/security-groups
      security_groups: compute/server/security-groups
In addition to the basic resource actions implied by the HTTP method, OpenStack services can expose custom actions that go beyond CRUD. Two ways of encoding action names in the HTTP request are common:
payload-encoded: the last component of the URL path is action and the payload contains the action name as the first JSON element (example)
path-encoded: the last component of the URL path is the action name (example)
Usually all custom actions should be listed in the mapping, because otherwise the last path component will be taken as a custom key of the resource or ignored right away:
custom_actions: map custom action names to the CADF action taxonomy (default: []).
The mapping of actions is complex: for payload-encoded actions, a default mapping is applied which determines the primary action (e.g. update) from the HTTP method and adds the action name from the payload (e.g. update/myaction).
For path-encoded actions you can achieve a similar behaviour with a generic rule of the form "<method>:*": "<action>" (e.g. "POST:*": "read"). You can refer to the actual action name in the path via * (e.g. "POST:*": "update/*"). If the right side of the rule is null, the entire request will be suppressed, so that no event is emitted (e.g. "POST:*": null).
If there is no rule matching the path suffix, it will be interpreted as a key, not as an action. That means that the action will be determined from the HTTP method only, and an attachment named key, containing the name of the key, will be added to the event.
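A hedged sketch pulling these rule forms together (the action names are invented for illustration):
custom_actions:
  # explicit mapping of a custom action name to the CADF taxonomy
  startup: start/startup
  # generic rule for path-encoded actions: POST /<resources>/<id>/<x> becomes update/<x>
  "POST:*": "update/*"
  # a null right side would suppress the event entirely, e.g. "DELETE:*": null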
Attributes of special importance can be added to every update-like event by specifying custom attributes:
custom_attributes: list attributes of special importance whose values should always be attached to the event; assign a type URI so they can be shown properly in UIs (default: [])
A singleton resource is one addressed by an API URL that always refers to exactly one resource instance. Some resources exist only once,
i.e. they only have a single instance and thus no unique ID. This is controlled by the following attribute:
singleton: true when only a single instance of a resource exists. Otherwise the resource is a collection, i.e. an ID needs to be specified to address individual resource instances in a URL (default: false)
For some resources, the API design does not follow the common OpenStack naming patterns. Those exceptions can be modelled with the following settings:
api_name: resource name in the URL path (default: <resource-name>); must be unique
type_uri: type-URI of the resource, used in the target.typeURI attribute of the produced CADF event (default: <parent-typeURI>/<resource-name>)
el_type_uri: type-URI of the resource instances (default: type_uri omitting the last character); not applicable to singletons
custom_id: indicate which resource attribute contains the unique resource ID (default: id)
custom_name: indicate which resource attribute contains the resource readable name (default: name)
type_name: JSON name for the resource, used by API designs that wrap the resource attributes into a single top-level attribute (default: api_name without the leading os- prefix, or the original resource name, with - replaced by _)
el_type_name: JSON name of the resource instances (default: type_name omitting the last character)
Resources can be nested, meaning that a resource is part of another resource. Nesting is used to model various design patterns:
composition: a resource is really part of another resource, so that e.g. the resource is deleted when its parent is deleted.
grouping: a singleton resource is used to group related resources or custom fields.
children:
  metadata:
    # collection of fields/keys
    singleton: true
    # wrapped in a JSON element named "meta"
    type_name: meta
  migrations:
    # defaults are all fine for this resource
  interfaces:
    # for some reason Nova does not use plural for the os-interfaces of a server
    api_name: 'os-interface'
    # in JSON payloads the resource attributes are wrapped in an element called 'interfaceAttachment'
    type_name: interfaceAttachments
    # the unique ID of an os-interface is located in attribute 'port_id' (not 'id')
    custom_id: port_id
The configuration option to record request payloads needs some special consideration when sensitive or bulky information is involved:
payloads: controls which attributes of the request payload may not be attached to the event (e.g. because they contain very sensitive information)
enabled: set to false to disable payload recording for this resource entirely (default: true)
exclude: exclude these payload attributes from the payload attachment (black-list approach, default: [])
include: only include these payload attributes in the payload attachment (white-list approach, default: all)
In our example this looks like this:
...
os-server-password:
  singleton: true
  payloads:
    # never record payloads for the os-server-password resource
    enabled: False
flavors:
  payloads:
    exclude:
      # filter lengthy fields with no real diagnostic value
      - description
      - links
Undeclared Resources
Resources that are not declared in the mapping file will be reported as unknown in the operational logs. The middleware still tries to create events for them based on heuristics; such resources can be recognized by the X prefix in the resource name.
When those X-resources show up, the mapping file should be extended with an appropriate resource definition, because the heuristics used to discover and map undeclared resources do not cover all kinds of requests, and there are ambiguities.
Developing Audit Middleware
Contributing
This project is open for external contributions. The issue list shows what is planned for upcoming releases.
Pull requests are welcome as long as you follow a few rules:
Ensure that the middleware cannot degrade availability (no crashes, no deadlocks, no synchronous remote calls)
Do not degrade performance
Include unit tests for new or modified code
Pass the static code checks
Keep the architecture intact, don’t add shortcuts, layers, …
Software Design
The purpose of this middleware is to create audit records from API calls.
Each record describes a user or system activity following the 5W1H principle:
who: which user?
what: which action? which parameters?
where: on which target resource?
when: what timestamp?
why: which service URL?
how: with what outcome (outcome attribute, HTTP response code)?
This information is gathered from the URL path and the exchanged payloads, which may contain important information like resource IDs or names. Discovering that information based on the hints in the mapping file is what most of the code is about.
Complexity comes from:
different styles of encoding actions and payloads
create calls, where the target resource ID needs to be fetched from the result payload
mass vs. single operations
Components/Packages
api: Implementation of the actual pipeline filter
notifier: Implementation of the asynchronous event push to the oslo bus
⛔️ This project has been deprecated and will not receive any regular maintenance.
The preferred authentication method for OpenStreetMap is now OAuth2, so this library isn’t used anymore.
ohauth
A most-of-the-way OAuth 1.0 client implementation in JavaScript. Meant to be
an improvement over the default linked one
because this uses idiomatic JavaScript.
GitHub – partial; the full flow is not possible because the access_token API is not CORS-enabled
API
// generates an oauth-friendly timestamp
ohauth.timestamp();

// generates an oauth-friendly nonce
ohauth.nonce();

// generate a signature for an oauth request
ohauth.signature("myOauthSecret", "myTokenSecret", "percent&encoded&base&string");

// make an oauth request
ohauth.xhr(method, url, auth, data, options, callback);

// options can be a header like
// { header: { 'Content-Type': 'text/xml' } }
ohauth.xhr('POST', url, o, null, {}, function(xhr) {
  // xmlhttprequest object
});

// generate a querystring from an object
ohauth.qsString({ foo: 'bar' }); // foo=bar

// generate an object from a querystring
ohauth.stringQs('foo=bar'); // { foo: 'bar' }
Just generating the headers
// create a function holding configuration
var auth = ohauth.headerGenerator({
  consumer_key: '...',
  consumer_secret: '...'
});

// pass just the data to produce the OAuth header with optional
// POST data (as long as it'll be form-urlencoded in the request)
var header = auth('GET', 'http://.../?a=1&b=2', { c: 3, d: 4 });

// or pass the POST data as a form-urlencoded string
var header = auth('GET', 'http://.../?a=1&b=2', 'c=3&d=4');
Logs Tenhou paifu (game records) into an excel, csv or html file with some key information.
If you like this project, please leave a star. It will be a great encouragement for me. And if you have any suggestions, please feel free to create an issue.
Since CLI-0.3.8, the project is only compatible with Python 3.10 or later.
If you are on Python 3.9 or earlier, please use CLI-0.3.7.1, the last version that supports those Python versions.
Or download it from PyPI with the following command.
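The command itself is not shown here; judging from the import in the code below, the package on PyPI appears to be paifulogger, so presumably:
pip install paifulogger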
Once Please enter the URL of match: appears, paste the URL and press Enter.
Note: In the latest version, you can input multiple URLs at once, separated by whatever you like. If you are lazy, you can even paste them with no separator at all.
After Match of {paifu name} has been recorded appears, the paifu has been successfully logged.
Inline
You can log the paifu manually with the following code.
from paifulogger import get_log_func, get_paifu, localized_str, log_paifu

url = "Your paifu URL"
local_lang = localized_str("en")  # Localization
log_formats = get_log_func(["csv", "html"])  # Log into csv and html file.
output = "./"  # Output directory
mjai = False  # Whether to output in mjai format

# Log the paifu into the file.
log_paifu(
    url,
    log_formats=log_formats,
    local_lang=local_lang,
    output=output,
    mjai=mjai
)

# Get Paifu object, which contains all the information of the paifu.
paifu = get_paifu(url, local_lang)
Features
Support multiple URLs at once.
Log paifu into excel, csv or html file with some key information. (-f, --format)
Support logging to multiple formats at once. (e.g.: -f csv -f html; -a, --all-formats)
Distinguish Sanma(3p) and Yonma(4p) and log into separate sheets.
Skip duplicated paifu
Remake the paifu with URLs already logged (-r, --remake). This will be useful when we update the logging information in the future.
Customized output directory (-o, --output)
Support mjai format paifu output (--mjai). You have to run git pull --recurse-submodules first.
Localization support (-l, --language)
English: en
Traditional Chinese: zh_tw
Simplified Chinese: zh
Japanese (ChatGPT): ja
Support config file. Placing config.json in the same directory as the execution enables local configuration. For global configuration, place it in the following directories:
Support logging from the Tenhou client (*.mjlog). (-c, --from-client DIR_TO_MJLOG)
Note: Typically, the saved directory is {Documents}/My Tenhou/log/ on Windows.
Match replay for the html format: click the match you want to review, and the match replay will pop up accordingly.
Information logged
Game time
Placing
URL (for future use)
Rate before the game
The change of Rate (note: this assumes the player has played more than 400 games)
Number of wins
Number of deal-ins
Future features
Agari analysis
Support Majsoul paifu
GUI
Contribute
We welcome all kinds of contributions, including but not limited to bug reports, pull requests, feature requests, documentation improvements, and localizations.
Project Blue Green: An Open Source Algae Bioreactor
Welcome to Project Blue Green! This open-source repository is your gateway to participating in the ongoing development of an efficient algae bioreactor, combining state-of-the-art technology with a DIY approach. Dive into the world of AI, ML, and IoT to cultivate algae effortlessly and contribute to a greener future.
Inclusive Technology:
Unlock the potential of AI, ML, and IoT to streamline algae growth. Our design prioritizes accessibility, ensuring that even beginners can set up and manage their bioreactors with ease.
Open-Source Innovation:
Embrace collaboration! From hardware and software to biochemistry, our project is open for contributions. Modify, enhance, and adapt – we’re building a community-driven initiative.
Simplified DIY:
Transform algae bioreactor construction into a DIY adventure. You don’t need to be a tech guru; follow our user-friendly guide, and you’ll be on your way to having your sustainable algae cultivation system up and running in no time.
Optimal Efficiency:
Project Blue Green is actively working to create the world’s most advanced yet straightforward algae bioreactor design. Through AI and ML algorithms, we’re fine-tuning the process for maximum efficiency, making sustainable practices accessible to all.
Climate Change Awareness:
More than just hardware and code, our documentation includes crucial information on climate change and global warming. Understand the environmental impact and join us in inspiring action for a cleaner, greener planet.
Community Engagement:
Join a community in the making, dedicated to positive environmental change. Whether you actively participate in development, contribute ideas, or support us through donations, every effort contributes to a more sustainable world.
Getting Started
Follow our straightforward instructions in the Documentation to start exploring and contributing to the ongoing development of your own algae bioreactor.
Contribute, share, and let’s collaborate to make Project Blue Green a driving force for sustainable practices worldwide.
Join Project Blue Green, and be a part of shaping the movement towards a more sustainable and eco-friendly future! (Note: This project is currently under active development and is not yet complete.)
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are extremely appreciated. See the open issues for a list of proposed features (and known issues).
Fork the Project
Create your Feature Branch (git checkout -b feature/AmazingFeature)
Commit your Changes (git commit -m 'Add some AmazingFeature')
Push to the Branch (git push origin feature/AmazingFeature)
Open a Pull Request
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Description: For my Digital Bike Speedometer, I have a Hall Effect Sensor connected to an LCD screen. The Hall Effect Sensor should be mounted in a static position on the bike near the wheel, and a magnet should be attached to the wheel. Each time the magnet passes the sensor, the Arduino registers a pulse. The code then uses the wheel's diameter and the number of revolutions to determine the bike's speed and distance travelled, and prints these values to the LCD screen.
Equipment needed:
Arduino
Hall Effect Sensor
Magnet
LCD screen (16×2)
Power Source (7V-12V battery)
Battery Connector
2 way switch
Breadboard
Wires
Function for calculating Speed and Distance (code)
Other Practical Skills needed: Programming, Soldering, Web Development, Wiring, Digital Electronics, Math.
This is the code that updates the Speed and Distance variables for my project:
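That function is not included in this write-up, so here is a hypothetical reconstruction of what it might look like, based on the variables used in the loop below (the wheel diameter and the unit conversions are assumptions, not the original code):
// Hypothetical sketch — relies on the globals revolutions, last_interval,
// mph and distance declared elsewhere in the sketch. Assumes a 26-inch wheel.
const float WHEEL_DIAMETER_IN = 26.0;
const float WHEEL_CIRCUMFERENCE_MI = WHEEL_DIAMETER_IN * 3.14159 / 63360.0; // 63360 inches per mile

void updateSpeedAndDistance()
{
  // distance travelled: one wheel circumference per revolution
  distance = revolutions * WHEEL_CIRCUMFERENCE_MI;

  // speed: one circumference covered in last_interval milliseconds,
  // scaled to miles per hour (3,600,000 ms per hour)
  if (last_interval > 0) {
    mph = WHEEL_CIRCUMFERENCE_MI * 3600000.0 / last_interval;
  }
}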
This is the code that prints the values to the LCD screen:
void loop() // Continuously loops over the following code to update the variables "mph" and "distance"
{
  int hall_val = digitalRead(2);
  if (hall_state != hall_val && hall_val == LOW)
  {
    revolutions++;
    last_interval = millis() - last_fall;
    last_fall = millis();
  }
  hall_state = hall_val;

  updateSpeedAndDistance();  // Calls "updateSpeedAndDistance" function here
  lcd.setCursor(0, 0);       // Sets cursor to the first column of the first row
  lcd.print("Mph:");         // Prints "Mph" on screen
  lcd.print(mph);            // Prints value for mph on screen
  lcd.setCursor(0, 1);       // Sets cursor to the first column of the second row
  lcd.print("Miles:");       // Prints "Miles" on screen
  lcd.print(distance);       // Prints value for distance on screen
}