| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
| |
This crate is the base framework-agnostic implementation of all data
structures and methods required for the IndieAuth protocol. Anything
that can deserialize HTTP request payloads with serde can utilize this
crate.
This is a good candidate for an independent release on crates.io once
the interface becomes stable enough.
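As a rough illustration (not necessarily the crate's actual API), such
a framework-agnostic, serde-deserializable request type could look like
this; the field names follow the IndieAuth authorization request
parameters, while the struct name is just a placeholder:

```rust
use serde::{Deserialize, Serialize};

// Placeholder type name; fields follow the IndieAuth authorization
// request parameters.
#[derive(Debug, Serialize, Deserialize)]
pub struct AuthorizationRequest {
    pub response_type: String,
    pub client_id: String,
    pub redirect_uri: String,
    pub state: String,
    pub code_challenge: String,
    pub code_challenge_method: String,
    #[serde(default)]
    pub scope: Option<String>,
    #[serde(default)]
    pub me: Option<String>,
}

// Any framework that can hand over the raw query string can then do:
// let req: AuthorizationRequest = serde_urlencoded::from_str(query)?;
```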
|
|
|
|
|
| |
I'm afraid this might've caused me to do some weird stuff with the
tempdir. Better do it like this.
|
|
|
|
| |
On query parsing error, this will return a MicropubError.
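A hedged sketch of that behaviour; the error and query shapes below are
placeholders, not the real Kittybox types:

```rust
use serde::Deserialize;

// Placeholder shapes; the real MicropubError and query types differ.
#[derive(Debug)]
struct MicropubError {
    error: &'static str,
    error_description: String,
}

#[derive(Debug, Deserialize)]
struct MicropubQuery {
    q: String,
    url: Option<String>,
}

fn parse_query(raw: &str) -> Result<MicropubQuery, MicropubError> {
    // Map a query-string parse failure to a Micropub-style error instead
    // of bubbling up a framework-specific rejection.
    serde_urlencoded::from_str(raw).map_err(|err| MicropubError {
        error: "invalid_request",
        error_description: err.to_string(),
    })
}
```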
|
|
|
|
|
| |
Looks like this shared data structure will be useful to me later when
splitting off the media endpoint into its own crate.
|
|
|
|
|
|
|
|
|
|
|
|
| |
This frees up the name for the future in-house IndieAuth
implementation and also clarifies the purpose of this module.
Its future is uncertain - most probably, when the token endpoint gets
finished, it will turn into a way to query that token endpoint. But
then, the media endpoint also depends on it, so I might have to copy
that implementation (the one that queries an external token endpoint)
and make it generic enough that it can either query an external
endpoint or use internal data.
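One possible shape for that generic interface, purely as a guess; every
name here is hypothetical:

```rust
// Hypothetical sketch of a verifier trait that both an HTTP client for
// an external token endpoint and a future in-house implementation could
// satisfy.
pub struct TokenData {
    pub me: String,
    pub client_id: String,
    pub scope: Vec<String>,
}

#[async_trait::async_trait]
pub trait TokenVerifier {
    type Error;

    /// Verify a bearer token and return the data associated with it.
    async fn verify(&self, token: &str) -> Result<TokenData, Self::Error>;
}

// The external implementation would call the configured token endpoint
// over HTTP; the in-house one would look the token up in local data.
```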
|
|
|
|
|
|
|
|
|
|
| |
Supported features:
- Streaming upload
- Content-addressed storage (sketched below)
- Metadata
  - MIME type (taken from Content-Type)
  - Length (though I could use stat() for this one)
  - Filename (for Content-Disposition: attachment, WIP)
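Roughly, the content-addressed streaming path amounts to hashing the
bytes as they arrive and then naming the file after the digest. A
simplified, blocking sketch (the real code is async and the function
name is made up):

```rust
use sha2::{Digest, Sha256};
use std::io::{Read, Write};

// Simplified sketch: stream the upload into a temporary file while
// hashing it, then rename the file to its own digest.
fn store_streaming(mut body: impl Read, dir: &std::path::Path) -> std::io::Result<String> {
    let tmp_path = dir.join("upload.tmp");
    let mut tmp = std::fs::File::create(&tmp_path)?;
    let mut hasher = Sha256::new();
    let mut buf = [0u8; 8192];

    loop {
        let n = body.read(&mut buf)?;
        if n == 0 {
            break;
        }
        hasher.update(&buf[..n]);
        tmp.write_all(&buf[..n])?;
    }

    let hash: String = hasher.finalize().iter().map(|b| format!("{:02x}", b)).collect();
    std::fs::rename(&tmp_path, dir.join(&hash))?;
    Ok(hash)
}
```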
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Axum has streaming bodies and allows writing simpler code. It also
helps enforce stronger types and looks much neater.
This allows me to progress on the media endpoint and add streaming
reads and writes to the MediaStore trait.
Metrics are temporarily not implemented. Everything else was
preserved, and the tests still pass after adjusting for the new
calling conventions.
TODO: create method routers for protocol endpoints
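For reference, a minimal sketch of what one of those method routers
could look like (extractor names vary between axum versions, and the
path and handler are placeholders):

```rust
use axum::{extract::BodyStream, http::StatusCode, routing::post, Router};
use futures_util::StreamExt;

// Placeholder handler: consume the request body as a stream of chunks
// instead of buffering the whole upload in memory.
async fn upload(mut body: BodyStream) -> StatusCode {
    while let Some(chunk) = body.next().await {
        match chunk {
            // A real handler would pass each chunk to the MediaStore backend.
            Ok(_bytes) => {}
            Err(_) => return StatusCode::BAD_REQUEST,
        }
    }
    StatusCode::CREATED
}

// A method router groups the verbs an endpoint supports under one path.
fn media_router() -> Router {
    Router::new().route("/media", post(upload))
}
```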
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Flake lock file updates:
• Updated input 'flake-utils':
'github:numtide/flake-utils/f7e004a55b120c02ecb6219596820fcd32ca8772' (2021-06-16)
→ 'github:numtide/flake-utils/7e2a3b3dfd9af950a856d66b0a7d01e3c18aa249' (2022-07-04)
• Updated input 'naersk':
'github:nmattia/naersk/f21309b38e1da0d61b881b6b6d41b81c1aed4e1d' (2022-05-03)
→ 'github:nmattia/naersk/cddffb5aa211f50c4b8750adbec0bbbdfb26bb9f' (2022-06-12)
• Updated input 'nixpkgs':
'github:nixos/nixpkgs/dfd82985c273aac6eced03625f454b334daae2e8' (2022-05-20)
→ 'github:nixos/nixpkgs/71a4f0dc3d80ba76f437c888c1c3d59f1df98163' (2022-07-05)
|
|
|
|
|
| |
Actually got the idea from https://xeiaso.net/, who groups xer
website's endpoints under the `.within` folder.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Kittybox's source code is moved to a subfolder
  - This improves Nix build caching, since changes to unrelated files
    no longer invalidate the source
- Package and test definitions were spun off into separate files
  - This makes my flake.nix much easier to navigate
  - This also makes it somewhat possible to use without flakes (but
    it is still not easy, so use flakes!)
- Some attributes were moved to comply with Nix 2.8's changes to the
  flake schema
|
| |
|
|
|
|
|
|
|
|
| |
Flake lock file updates:
• Updated input 'nixpkgs':
'github:vikanezrimaya/nixpkgs/bf819aeeb2f0954506a748ff117962edc8cf732d' (2022-03-28)
→ 'github:nixos/nixpkgs/dfd82985c273aac6eced03625f454b334daae2e8' (2022-05-20)
|
|
|
|
|
|
|
|
|
|
|
| |
I said some boastful words about Kittybox being able to scale
horizontally and I wanted to prove them. This is the proof.
This test creates an NFS file server, then spawns three VMs. After
provisioning a website on one of them, it queries the website on all
three machines. This shows that with a shared backing store, Kittybox
can scale horizontally to match however much traffic you're getting.
|
| |
|
|
|
|
|
|
|
|
| |
This bit of code is still disabled for now though. I need to actually
gather and render facepiles.
Additionally, the details element is now not shown at all if there
were no reactions to the post, which saves space.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Like and bookmark posts really use the same framework, so for now a
unit test for like posts is sufficient. Of course, for proper
coverage, one could introduce tests for bookmarks too, especially if
one chooses to render them differently. The logic will be pretty much
the same though.
Replies might use the same logic, since those are also
Webmention-oriented posts.
(It looks like another way to classify MF2 documents is slowly forming
in my brain. Maybe I should write about it on my blog.)
|
|
|
|
|
|
|
| |
There were lots of unnecessary Option::unwrap() invocations that could
be replaced with `if let` statements. This makes the code cleaner and
less likely to panic in case a corrupted, incomplete or manually
injected MF2-JSON document needs to be rendered.
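The pattern, in miniature (the property access below is illustrative):

```rust
use serde_json::Value;

// Illustrative helper: render the post's name if it has one.
fn render_name(post: &Value) -> String {
    // Before: post["properties"]["name"][0].as_str().unwrap() panics on
    // documents without a name.
    // After: fall back gracefully when the property is missing.
    if let Some(name) = post["properties"]["name"][0].as_str() {
        format!("<h1>{}</h1>", name)
    } else {
        String::new()
    }
}
```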
|
|
|
|
| |
Now everyone will know where to get my software if they see it.
|
|
|
|
|
|
| |
It mostly checks the same old things as with notes, but it also checks
for a name (and since the name is explicitly provided, it works with
the buggy version of the `microformats` crate).
|
|
|
|
|
|
|
|
|
|
| |
New generators include:
- Articles (h-entry with a name)
- Replies (notes with an in-reply-to)
- Likes (h-entries with a like-of)
For replies and likes, there are variants with an h-cite (full reply
context) or a link (partial reply context).
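As a rough idea of the shape of such a generator (the real ones
randomize the content; the URLs and text here are hard-coded
placeholders):

```rust
use serde_json::{json, Value};

// Illustrative sketch: a like post with a full reply context (an
// embedded h-cite). Real generators would randomize URLs and text.
fn gen_like_with_context() -> Value {
    json!({
        "type": ["h-entry"],
        "properties": {
            "like-of": [{
                "type": ["h-cite"],
                "properties": {
                    "url": ["https://example.com/posts/1"],
                    "name": ["An example post"],
                    "content": [{"html": "<p>Hello world!</p>"}]
                }
            }]
        }
    })
}
```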
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
These unit tests generate a random MF2-JSON post, convert it to
MF2-HTML using the template and then read it back using the
`microformats` crate.
The only problem is that the `microformats` crate has a nasty bug with
overstuffing implied properties. This is being worked on:
https://gitlab.com/maxburon/microformats-parser/-/issues/7
For now the tests are marked as ignored because they fail. But the
function that generates the posts should remain here for documentation
and potential code sharing with the `microformats` crate, perhaps even
migrating to a subcrate there.
|
| |
|
|
|
|
|
|
|
| |
These features share some code since they both require fetching reply
contexts, so it makes sense to implement them together.
TODO: cover webmention sending with integration tests
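For the record, the sending side boils down to a form-encoded POST to
the discovered endpoint; a rough sketch assuming discovery has already
happened:

```rust
// Rough sketch: send a Webmention to an already-discovered endpoint.
// Endpoint discovery (Link header / <link rel="webmention">) is omitted.
async fn send_webmention(
    client: &reqwest::Client,
    endpoint: &str,
    source: &str,
    target: &str,
) -> Result<(), reqwest::Error> {
    client
        .post(endpoint)
        .form(&[("source", source), ("target", target)])
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}
```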
|
| |
|
|
|
|
|
|
| |
It looks like previous versions did not check the Content-Type header
and I was able to get away with it. Warp is much stricter in that
regard (which is a good thing).
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Now I will know if something breaks horribly again.
|
|
|
|
|
|
| |
Iterator::skip_while() also yields the item it stops skipping on.
Reimplement the combinator that I need using a loop over
Iterator::by_ref() instead. This terminates once the end of the
iterator is reached.
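The replacement pattern looks roughly like this (the marker and
function name are just examples):

```rust
// Consume lines up to a marker without yielding the marker itself,
// leaving the rest of the iterator usable by the caller.
fn skip_past_marker<'a, I: Iterator<Item = &'a str>>(lines: &mut I) {
    for line in lines.by_ref() {
        if line.trim() == "---" {
            break;
        }
    }
    // If the marker never appears, the loop simply ends once the
    // iterator is exhausted.
}
```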
|
| |
|
|
|
|
| |
Closes #4.
|
|
|
|
|
|
|
|
| |
This will ease future extraction of the media endpoint to a separate
crate. This is highly desirable, since it will allow Kittybox's media
endpoint to be used on its own, e.g. in custom solutions that reuse my
code to fill in functionality their authors don't want to implement
themselves.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
| |
- Somehow it looks like zlib is required, but wasn't passed
- Log level in the test is set to (mostly) info
- A needless comment is deleted
- Single-step build is enabled. Since this is a multi-crate workspace
now, naersk will not offer much in terms of incrementality (and I
use `nix develop` anyway with a dev-shell)
|
|
|
|
|
| |
Match blocks and ifs are actually perfectly usable as expressions. I
forgot about that when writing that code.
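The kind of cleanup this enables:

```rust
fn main() {
    let count = 1;
    // `if` is an expression: bind its result directly.
    let label = if count == 1 { "post" } else { "posts" };

    let code = 404u16;
    // `match` is an expression too, so no mutable temporary is needed.
    let status = match code {
        200..=299 => "success",
        404 => "not found",
        _ => "error",
    };

    println!("{count} {label}: {status}");
}
```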
|
|
|
|
|
|
|
|
|
| |
Templates and utility types are now separate crates, which speeds up
compilation and linting and makes reuse or replacement easier.
More crates could potentially be split out or modularized, resulting
in faster builds, smaller binaries (whenever features are excluded)
and even more opportunities for reuse.
|
| |
|