<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>https://chrisk.app/blog</id>
  <title>Christoph Kappestein | Blog</title>
  <updated>2026-04-12T21:17:00Z</updated>
  <link rel="self" href="https://chrisk.app/feed" type="application/atom+xml"/>
  <link rel="alternate" href="https://chrisk.app/blog" type="text/html"/>
  <entry>
    <id>https://chrisk.app/blog/exploring-solutions-to-build-a-secure-plugin-system-for-php-apps</id>
    <title>Exploring solutions to build a secure plugin system for PHP apps</title>
    <updated>2026-04-12T21:17:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/exploring-solutions-to-build-a-secure-plugin-system-for-php-apps" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>In this post I'd like to go through solutions which can be used to build a secure plugin system for PHP apps</summary>
    <category term="php"/>
    <category term="sandbox"/>
    <category term="api"/>
    <content type="html">&lt;p&gt;Cloudflare has recently released &lt;a href="https://github.com/emdash-cms/emdash"&gt;EmDash&lt;/a&gt;, which is marketed as a
secure &lt;a href="https://wordpress.org/"&gt;WordPress&lt;/a&gt; alternative. The main selling point is that EmDash is more secure
since, unlike WordPress, it does not execute plugins inside the main app. In WordPress a plugin is in the end just PHP code
which gets included into the main app, so it can easily access every database table and a single malicious plugin can
corrupt your complete website. This is indeed a problem, and many security issues are introduced through plugins. In
this post I'd like to explore current solutions and ideas for building a secure plugin system.&lt;/p&gt;

&lt;p&gt;In general, plugin systems are a central part of many projects, allowing users to customize or extend specific
parts of an app. Especially for open-source projects they are a great way to build larger ecosystems, which makes
building a secure plugin system an important task for many systems. It of course also depends on the size of the
project: for small projects it is perfectly fine to simply include PHP files as plugins. Let's start by looking at the
available solutions:&lt;/p&gt;

&lt;h2&gt;Scripting language&lt;/h2&gt;

&lt;p&gt;While PHP itself is already a scripting language, there are extensions to execute another scripting language within
PHP. One approach, which &lt;a href="https://www.mediawiki.org/"&gt;MediaWiki&lt;/a&gt; has chosen, is the integration of
&lt;a href="https://www.mediawiki.org/wiki/Lua/Overview"&gt;Lua&lt;/a&gt; as the scripting language for plugins. They have also
developed the &lt;a href="https://pecl.php.net/package/LuaSandbox"&gt;LuaSandbox&lt;/a&gt; PHP extension to control CPU and memory
limits for script execution.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.php.net/manual/en/book.v8js.php"&gt;v8js&lt;/a&gt; PHP extension integrates the &lt;a href="https://v8.dev/"&gt;V8&lt;/a&gt;
JavaScript engine. This is a heavyweight solution: embedding the V8 engine brings additional complexity, and
your users need to install an extra extension, which makes the setup more complicated.&lt;/p&gt;

&lt;p&gt;Recently I found &lt;a href="https://github.com/aheinze/ScriptLite"&gt;ScriptLite&lt;/a&gt;, which executes a subset of
ECMAScript without the need to include V8, since it parses and executes the ECMAScript directly in PHP. This looks
promising, but it is a very new project, so we need to wait and see how it matures.&lt;/p&gt;

&lt;h2&gt;WASM&lt;/h2&gt;

&lt;p&gt;In theory WASM would be the perfect solution: we could compile code from different programming languages into WASM and
then run it safely in our app, just as it is executed in the browser. But unfortunately this currently only works in
theory, since many details are missing; for example, there is no easy way to share complex data structures like
strings, arrays or objects across the WASM boundary. The PHP ecosystem also currently has no actively maintained
extension to run WASM code. In the future this could be a great option, but the ecosystem still needs to evolve.&lt;/p&gt;

&lt;h2&gt;DSL&lt;/h2&gt;

&lt;p&gt;Besides integrating an existing scripting language, an alternative would be to build a custom DSL which your
users can use to customize the app. Tools like &lt;a href="https://www.antlr.org/"&gt;ANTLR&lt;/a&gt; really help
to develop such custom DSLs, but this is of course also a complex task, and it only makes sense if you build a
domain-specific language rather than a general programming language; in the latter case you would be better off using
an existing scripting language.&lt;/p&gt;

&lt;h2&gt;Transpiler&lt;/h2&gt;

&lt;p&gt;A transpiler parses PHP code and removes all dangerous constructs from the AST, so that you (in theory) only execute
a safe subset of PHP. I have built such a &lt;a href="https://github.com/apioo/psx-sandbox"&gt;transpiler&lt;/a&gt; myself, but in
the end this is not truly safe and remains a security risk, since there will always be cases where the
transpiler does not correctly sanitize the code.&lt;/p&gt;
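&lt;p&gt;To illustrate the general idea (this is only a hypothetical sketch, much simpler than a real AST-based transpiler,
and explicitly NOT a safe sandbox), one could use PHP's own tokenizer to reject code which calls functions outside an
allow-list:&lt;/p&gt;

```php
<?php
// Hypothetical sketch, NOT a safe sandbox: a real transpiler works on the
// AST, this only scans tokens for calls to functions outside an allow-list.
function hasDisallowedCall(string $code, array $allowed): bool
{
    $tokens = token_get_all('<?php ' . $code);
    $count = count($tokens);
    for ($i = 0; $i < $count; $i++) {
        $token = $tokens[$i];
        if (!is_array($token) || $token[0] !== T_STRING) {
            continue;
        }
        // skip whitespace between the name and a possible "("
        $j = $i + 1;
        while ($j < $count && is_array($tokens[$j]) && $tokens[$j][0] === T_WHITESPACE) {
            $j++;
        }
        // a name directly followed by "(" is treated as a function call
        if (($tokens[$j] ?? null) === '(' && !in_array(strtolower($token[1]), $allowed, true)) {
            return true;
        }
    }
    return false;
}

var_dump(hasDisallowedCall('echo strlen("abc");', ['strlen'])); // bool(false)
var_dump(hasDisallowedCall('exec("id");', ['strlen']));         // bool(true)
```

This already shows the weakness of the approach: variable functions, callbacks or string-based invocations slip past
such simple checks, which is exactly why a transpiler stays a security risk.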

&lt;h2&gt;REST API&lt;/h2&gt;

&lt;p&gt;Instead of running your plugin inside your code, with this idea a plugin is a small service which provides a REST API
to handle the logic. This is really secure, since the plugin runs in a completely separate environment without access to
the database or filesystem. Shopify also uses this approach, and I have recently found &lt;a href="https://wp-apps.org/"&gt;WPApps&lt;/a&gt;,
which brings the idea to WordPress. But of course this setup is more complex, since every plugin now needs to be hosted.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;To come back to EmDash and WordPress: if PHP had a stable sandboxed scripting language, the WordPress plugin system
might be in a better state, since it could be used for safe execution of such plugins. From the projects above we can
also see that there is demand, and there are probably many closed-source apps with the same requirement. I think the PHP
ecosystem would greatly benefit from a solid and stable scripting language to run untrusted code.&lt;/p&gt;

&lt;p&gt;Looking into the future, if such a scripting language evolves we might also create a PSR standard for script execution,
i.e. we could define a simple interface, and different environments could provide different implementations to enable
script execution.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;interface ScriptEngineInterface
{
    /**
     * Executes the provided script code
     */
    public function execute(string $code, Context $context): mixed;
}&lt;/code&gt;&lt;/pre&gt;
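&lt;p&gt;As a purely hypothetical illustration (the &lt;code&gt;Context&lt;/code&gt; class and the engine below are invented for this
sketch), an environment could ship its own implementation while callers only depend on the interface:&lt;/p&gt;

```php
<?php
// Hypothetical sketch: Context and TemplateEngine are invented names to
// show how callers would depend on the interface, not on a concrete engine.
class Context
{
    public function __construct(public readonly array $variables = []) {}
}

interface ScriptEngineInterface
{
    /**
     * Executes the provided script code
     */
    public function execute(string $code, Context $context): mixed;
}

// A toy engine which only substitutes placeholders; a real implementation
// could delegate to LuaSandbox, v8js or another sandboxed backend.
class TemplateEngine implements ScriptEngineInterface
{
    public function execute(string $code, Context $context): mixed
    {
        return strtr($code, $context->variables);
    }
}

$engine = new TemplateEngine();
echo $engine->execute('Hello {name}', new Context(['{name}' => 'World'])); // Hello World
```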

&lt;p&gt;This is of course only a rough idea, but it would be a great step forward to build more secure systems.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://chrisk.app/blog/matomo-setup-auto-archiving-with-docker</id>
    <title>Matomo setup Auto-Archiving with Docker</title>
    <updated>2025-09-21T20:32:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/matomo-setup-auto-archiving-with-docker" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>In this post I will briefly explain how to set up Matomo Auto-Archiving using Docker</summary>
    <category term="matomo"/>
    <category term="analytics"/>
    <content type="html">&lt;p&gt;Recently I have migrated all analytics of our projects from Google Analytics to Matomo. &lt;a href="https://matomo.org/"&gt;Matomo&lt;/a&gt; is
a self-hosted alternative to Google Analytics with better privacy handling. Since all our projects run on &lt;a href="https://github.com/apioo/fusio-plant"&gt;Plant&lt;/a&gt;,
we use the official &lt;a href="https://hub.docker.com/_/matomo/"&gt;Docker image&lt;/a&gt; to run Matomo.&lt;/p&gt;

&lt;p&gt;For larger websites it is recommended to set up &lt;a href="https://matomo.org/faq/on-premise/how-to-set-up-auto-archiving-of-your-reports/"&gt;Auto-Archiving&lt;/a&gt;,
which processes your analytics data in the background. The docs explain the setup for a plain installation, but to execute the cron
inside the Docker container you need a different command. Our host server runs Ubuntu 24.04, and to execute the cron we simply configure the following entry
on our host system:&lt;/p&gt;

&lt;pre&gt;5 * * * * root /usr/bin/docker exec -t [container_name] /usr/bin/bash -c -i "./console core:archive --url=https://[your_url]/" &gt; /tmp/matomo.log&lt;/pre&gt;

&lt;p&gt;This is basically only a short post to document this for the future, and maybe it can also help others.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://chrisk.app/blog/fusio-6.0-released</id>
    <title>Fusio 6.0 released</title>
    <updated>2025-09-06T14:30:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/fusio-6.0-released" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>In this post I will talk about the latest Fusio 6.0 major release</summary>
    <category term="fusio"/>
    <category term="api"/>
    <category term="rest"/>
    <category term="api-management"/>
    <category term="api-gateway"/>
    <category term="backend"/>
    <content type="html">&lt;p&gt;Today we have released the next major version 6.0 of &lt;a href="https://www.fusio-project.org/"&gt;Fusio&lt;/a&gt;. Fusio is an open source
API management platform which helps you to build APIs. In this post I'd like to talk about the new features of the 6.0 release. For all technical
details you can take a look at the &lt;a href="https://www.fusio-project.org/blog/post/fusio-6.0-released"&gt;release post&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Connection designer&lt;/h2&gt;

&lt;p&gt;From the beginning Fusio has always been a tool with a feature-set between an API gateway (like &lt;a href="https://konghq.com/"&gt;Kong&lt;/a&gt; or &lt;a href="https://tyk.io/"&gt;Tyk&lt;/a&gt;,
which route traffic to internal services and handle authorization etc.) and a backend (like &lt;a href="https://firebase.google.com/"&gt;Firebase&lt;/a&gt; or &lt;a href="https://supabase.com/"&gt;Supabase&lt;/a&gt;,
where a developer can build actual endpoints). The backend part was always a bit behind, since you needed external tools, i.e.
a database management tool or an HTTP client, to build a backend. I think with the 6.0 version we have finally completed the backend feature-set, so that it is
now possible to build complete apps without leaving Fusio. This is possible through the new connection designer panels which we have implemented.&lt;/p&gt;

&lt;img src="https://chrisk.app/img/blog/fusio-6.0-released/connection_list.png" alt="Connection list" class="img-fluid mb-3"&gt;

&lt;p&gt;The connection list contains the external services which you want to use in your app; in this example we have a MySQL database connection, a remote
HTTP Star Wars API and a local cache folder on the filesystem. Each connection now has a new terminal button right beside the edit button,
which we call the designer. This button redirects you to the fitting designer for the connection type.
In the following I will cover the three new designer types.&lt;/p&gt;

&lt;h3&gt;Database&lt;/h3&gt;

&lt;p&gt;The database designer contains an overview of all tables; it is possible to create and modify the schema of each table,
and also to view and edit the rows of each table. Through this you can design and manage the database schema for your app.&lt;/p&gt;

&lt;img src="https://chrisk.app/img/blog/fusio-6.0-released/connection_database.png" alt="Database connection designer" class="img-fluid mb-3"&gt;

&lt;h3&gt;HTTP&lt;/h3&gt;

&lt;p&gt;The HTTP designer provides a small HTTP client which can be used to invoke external APIs. This can be useful to test an API
before actually implementing it in an action.&lt;/p&gt;

&lt;img src="https://chrisk.app/img/blog/fusio-6.0-released/connection_http.png" alt="HTTP connection designer" class="img-fluid mb-3"&gt;

&lt;h3&gt;Filesystem&lt;/h3&gt;

&lt;p&gt;The filesystem designer shows all files at the filesystem connection and also provides a way to upload new files.&lt;/p&gt;

&lt;img src="https://chrisk.app/img/blog/fusio-6.0-released/connection_filesystem.png" alt="Filesystem connection designer" class="img-fluid mb-3"&gt;

&lt;p&gt;Currently we also have some connections, like the Message-Queue or MongoDB connection, which have no designer, but we plan to implement these
in the future. The designer panels also help to close an important feature gap for a potential Fusio cloud service, so that developers can register
and build complete apps directly within Fusio.&lt;/p&gt;

&lt;h2&gt;MCP (Model-Context-Protocol)&lt;/h2&gt;

&lt;p&gt;With Fusio we don't want to jump directly on the AI hype train, but we have looked at it carefully and found a great way to help LLMs
interact with Fusio which also stays true to the self-hosted spirit, so that we don't force our users to use external APIs. The solution is the
integration of an &lt;a href="https://modelcontextprotocol.io/"&gt;MCP server&lt;/a&gt;. Through this you can now invoke all operations through an LLM.
The MCP server supports the stdio transport through a simple command:&lt;/p&gt;

&lt;pre&gt;php bin/fusio mcp&lt;/pre&gt;

&lt;p&gt;We also have experimental support for the HTTP transport, but it is disabled by default, so you need to activate it in the configuration; once
enabled you can use the &lt;code&gt;/mcp&lt;/code&gt; endpoint. Through this MCP server you can invoke the internal backend operations via an LLM, which you can
use to build your API. The same logic can also be used for the API which you have built with Fusio, so we have automatically created an MCP
server for all our users. This feature is completely new but provides many exciting possibilities for using and integrating Fusio with LLMs.&lt;/p&gt;

&lt;h2&gt;OAuth2 authorization server&lt;/h2&gt;

&lt;p&gt;A great feature of Fusio are the dedicated apps which you can install through our marketplace, for example the developer app, which helps to
build a developer portal, or the backend app, which is used to manage your Fusio instance. All those apps are basically JavaScript apps which
work with the Fusio API.&lt;/p&gt;

&lt;p&gt;With this release we have implemented a new OAuth2 authorization server which automatically integrates with every app. This means that
you automatically get a new OAuth2 login button:&lt;/p&gt;

&lt;img src="https://chrisk.app/img/blog/fusio-6.0-released/oauth2_redirect.png" alt="OAuth2 login button" class="img-fluid mb-3"&gt;

&lt;p&gt;Clicking the "Fusio" login button redirects you to the internal OAuth2 authorization server.&lt;/p&gt;

&lt;img src="https://chrisk.app/img/blog/fusio-6.0-released/oauth2_authorize.png" alt="OAuth2 authorization" class="img-fluid mb-3"&gt;

&lt;p&gt;There a user needs to authenticate and approve the authorization. The user also has the option to deselect specific scopes so that the app
can access only specific parts of the API. This also helps our users to build completely new apps and use Fusio as the authorization server. Since
we follow only the OAuth2 specification, a user can later easily swap the OAuth2 server for a different provider.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Personally I feel that the 6.0 release marks a great milestone for the Fusio project, which is now a solid open source
platform to build APIs. If you need to build an API, or you have an API-related task, feel free to give Fusio a try. For more information you can
take a look at our &lt;a href="https://www.fusio-project.org/"&gt;website&lt;/a&gt;, and if you want to support us you can also give us a star on
&lt;a href="https://github.com/apioo/fusio"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://chrisk.app/blog/lessons-learned-from-building-a-docker-based-server-panel</id>
    <title>Lessons learned from building a Docker-based server panel</title>
    <updated>2025-06-28T17:03:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/lessons-learned-from-building-a-docker-based-server-panel" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>This post talks about the lessons learned from building a Docker-based server panel to self-host apps.</summary>
    <category term="docker"/>
    <category term="self-host"/>
    <category term="server"/>
    <category term="panel"/>
    <content type="html">&lt;p&gt;Today I have released a first version of &lt;a href="https://github.com/apioo/fusio-plant"&gt;Fusio Plant&lt;/a&gt;, an open source
server management tool to easily self-host apps on your server. In this post I'd like to share some experiences from this process.&lt;/p&gt;

&lt;p&gt;While building Plant, we had the goal to keep the host as small as possible: we basically only wanted to install Nginx and Docker
and start all projects through a simple &lt;code&gt;docker-compose.yml&lt;/code&gt; file. We also wanted to run the Plant app itself as a Docker container,
so that it is easy to update the server panel itself. Moving the Plant app into a Docker container also means that we can no
longer directly execute commands on the host. In general this is a good thing, but for a server panel we need the option to, for example, change
the Nginx configuration or run Docker commands on the host.&lt;/p&gt;

&lt;h2&gt;Host Docker communication&lt;/h2&gt;

&lt;p&gt;In the development process, we developed three versions to enable this container-to-host communication. These ideas may
also be useful for other scenarios, so I will walk you through each version.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Cron&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;The first implementation used a simple cronjob which executed a bash script. The bash script walked through each file in an
&lt;code&gt;input/&lt;/code&gt; folder, executed each file and wrote the output to an &lt;code&gt;output/&lt;/code&gt; folder. Those &lt;code&gt;input/&lt;/code&gt;
and &lt;code&gt;output/&lt;/code&gt; folders are also mounted into the docker container and inside the container we wrote a command into the
&lt;code&gt;input/&lt;/code&gt; folder and waited until the result was available at the &lt;code&gt;output/&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;The biggest limitation of this solution was speed since cron can execute a script only once every minute. This means in the worst
case, a user needs to wait almost a minute for the command to be executed.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;inotifywait&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;To improve the performance we tried &lt;code&gt;inotifywait&lt;/code&gt;, which is basically a tool you can use to listen for
file changes inside a folder. The setup is basically identical to the cron version, but instead of cron we use &lt;code&gt;inotifywait&lt;/code&gt;
to listen for file changes inside the &lt;code&gt;input/&lt;/code&gt; folder and then write the result to the &lt;code&gt;output/&lt;/code&gt; folder.
To run this script we also used &lt;a href="https://supervisord.org/"&gt;supervisord&lt;/a&gt; to keep the bash script alive in case of errors.&lt;/p&gt;

&lt;p&gt;Initially this worked and improved the performance greatly, but unfortunately there were some scenarios where &lt;code&gt;inotifywait&lt;/code&gt;
could not detect file changes, for example when the Docker app placed multiple files into the &lt;code&gt;input/&lt;/code&gt; folder at once.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Named pipe&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;As the final solution, we changed the input folder to a &lt;a href="https://en.wikipedia.org/wiki/Named_pipe"&gt;named pipe&lt;/a&gt;.
We can mount this pipe into our Docker container and write events directly into it. On the host we can then listen on the
pipe and execute events directly when they occur.&lt;/p&gt;
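&lt;p&gt;As a small sketch of how such a pipe could be created (the path below is only illustrative, Plant mounts its actual
pipe into the container), PHP's POSIX extension offers &lt;code&gt;posix_mkfifo&lt;/code&gt;:&lt;/p&gt;

```php
<?php
// Sketch: create a named pipe (FIFO) which host and container can share;
// the path used here is only an example for demonstration.
function createPipe(string $path): string
{
    if (!file_exists($path)) {
        // 0644: owner read/write, others read; requires ext-posix
        posix_mkfifo($path, 0644);
    }
    clearstatcache();
    return filetype($path); // "fifo" for a named pipe
}

$pipe = sys_get_temp_dir() . '/plant_input_demo';
echo createPipe($pipe); // fifo
unlink($pipe);
```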

&lt;p&gt;The &lt;a href="https://github.com/apioo/fusio-plant/blob/main/bash/executor.sh#L157"&gt;bash script&lt;/a&gt; on the host can now
easily listen for changes on the pipe:&lt;/p&gt;

&lt;pre&gt;
input=/opt/plant/input

while true
do
  while read -r line; do execute_command "$line"; done &lt; $input
  sleep 1
done
&lt;/pre&gt;

&lt;p&gt;To write events to the pipe, we can use basic file functions:&lt;/p&gt;

&lt;pre&gt;
$input = fopen('/tmp/input', 'w');
fwrite($input, '{"type": "my_command"}');
fclose($input);
&lt;/pre&gt;

&lt;p&gt;After writing the command to the pipe we can directly check the &lt;code&gt;/output&lt;/code&gt; file and wait for
a response. To verify that the script was fully executed we check for a specific &lt;code&gt;--EOF-MARKER--&lt;/code&gt;
marker at the end of the file, and if it is available we stop reading the file.&lt;/p&gt;

&lt;pre&gt;
$response = '';
$outputFile = '/tmp/output';
$output = fopen($outputFile, 'r');
$count = 0;
while ($count &lt; 32) {
    $size = filesize($outputFile);
    if ($size &gt; 0) {
        $response .= fread($output, $size);
    }

    if (str_contains($response, '--EOF-MARKER--')) {
        break;
    }

    usleep(100_000);
    clearstatcache();

    $count++;
}

fclose($output);
&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;clearstatcache&lt;/code&gt; call in the snippet above is crucial, since PHP automatically caches the result
of the &lt;code&gt;filesize&lt;/code&gt; function, so we need to clear the cache on every iteration to get the live file size, which
often changes during command execution.&lt;/p&gt;
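&lt;p&gt;A minimal demonstration of this stat-cache behaviour (the file here is just a throwaway temp file):&lt;/p&gt;

```php
<?php
// Demonstrates PHP's stat cache: filesize() results are cached, so a
// polling loop may see a stale size unless clearstatcache() is called.
$file = tempnam(sys_get_temp_dir(), 'demo');
file_put_contents($file, 'abc');
$first = filesize($file); // 3, and now cached

file_put_contents($file, 'abcdef');
$stale = filesize($file); // may still report 3 from the cache

clearstatcache(true, $file);
$fresh = filesize($file); // 6, the live size
unlink($file);
```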

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In conclusion, we have looked into three solutions to enable Docker-to-host communication. In our case the named pipe
solution works perfectly, since it is fast and lightweight, and we can now easily execute commands on the host system.
In case you are interested in a new Docker-based server management tool, check out the &lt;a href="https://github.com/apioo/fusio-plant"&gt;Plant GitHub repository&lt;/a&gt;.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://chrisk.app/blog/typeschema-a-json-specification-to-describe-data-models</id>
    <title>TypeSchema a JSON specification to describe data models</title>
    <updated>2025-01-02T23:03:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/typeschema-a-json-specification-to-describe-data-models" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>This post talks about the latest TypeSchema version and how it can be used to generate models in different environments.</summary>
    <category term="type-schema"/>
    <category term="json-schema"/>
    <category term="data"/>
    <category term="model"/>
    <category term="specification"/>
    <content type="html">&lt;p&gt;In this post I'd like to talk about the &lt;a href="https://typeschema.org/"&gt;TypeSchema&lt;/a&gt; specification and the changes in the latest version.&lt;/p&gt;

&lt;p&gt;To start, &lt;a href="https://typeschema.org/"&gt;TypeSchema&lt;/a&gt; is a JSON specification to describe data models in a language neutral format.
Basically it can be seen as an alternative to JSON schema with a focus on code generation (and not validation).
It helps you to build type-safe applications by sharing core data models in different environments.&lt;/p&gt;

&lt;p&gt;The TypeSchema specification is reversible: you can transform a TypeSchema specification into actual code and then use
a reflection library to turn this code back into a TypeSchema specification without any data loss:&lt;/p&gt;

&lt;hr&gt;

&lt;figure&gt;
&lt;pre style="text-align:center"&gt;
Generator           Reflection
|                   |
TypeSchema ---&gt; Generated Code ---&gt; TypeSchema
&lt;/pre&gt;
&lt;/figure&gt;

&lt;hr&gt;

&lt;p&gt;In this case the TypeSchema on the left is identical to the TypeSchema on the right. To give you a practical example, let's
take a look at the following TypeSchema specification:&lt;/p&gt;

&lt;h2&gt;TypeSchema&lt;/h2&gt;

&lt;pre&gt;{
  "definitions": {
    "Student": {
      "type": "struct",
      "properties": {
        "firstName": {
          "type": "string"
        },
        "lastName": {
          "type": "string"
        },
        "age": {
          "type": "integer"
        }
      }
    }
  },
  "root": "Student"
}
&lt;/pre&gt;

&lt;p&gt;Through the code generator we can turn this specification into actual code; in this example
we use the Java generator.&lt;/p&gt;

&lt;h2&gt;Generated Java Code&lt;/h2&gt;

&lt;pre&gt;import com.fasterxml.jackson.annotation.*;

public class Student {
    private String firstName;
    private String lastName;
    private Integer age;

    @JsonSetter("firstName")
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    @JsonGetter("firstName")
    public String getFirstName() {
        return this.firstName;
    }

    @JsonSetter("lastName")
    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    @JsonGetter("lastName")
    public String getLastName() {
        return this.lastName;
    }

    @JsonSetter("age")
    public void setAge(Integer age) {
        this.age = age;
    }

    @JsonGetter("age")
    public Integer getAge() {
        return this.age;
    }
}
&lt;/pre&gt;

&lt;p&gt;Now we can use the &lt;a href="https://github.com/apioo/typeschema-reflection-java"&gt;reflection library&lt;/a&gt; to transform this model back into a TypeSchema
specification which looks exactly like the schema defined above.&lt;/p&gt;

&lt;p&gt;This should give you a rough understanding of how TypeSchema works; for more details please take a look at the &lt;a href="https://typeschema.org/"&gt;website&lt;/a&gt;
or the &lt;a href="https://app.typehub.cloud/d/typehub/typeschema"&gt;specification&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Changes&lt;/h2&gt;

&lt;p&gt;With the latest version we have moved away from the JSON Schema compatibility which we had maintained for some years. This means we now use dedicated keywords
which are not compatible with JSON Schema, so you need to decide whether you want to use TypeSchema or JSON Schema. The following list covers the
important changes.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Validation&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;We have removed all validation keywords from the specification, i.e. &lt;code&gt;required&lt;/code&gt; or &lt;code&gt;minLength&lt;/code&gt;, to make clear that TypeSchema
only helps you to model your data; it is not intended to validate your data. We also no longer use the dollar sign &lt;code&gt;$&lt;/code&gt; in our keywords, since
it makes them more complicated for code generators to process.&lt;/p&gt;

&lt;p&gt;We think that validation must be done in your domain layer, where you also generate fitting error messages. With TypeSchema you only describe which
fields are available, and our code generator can then generate DTOs for every object structure. These DTOs can then be used in your domain layer
to validate the incoming data.&lt;/p&gt;
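&lt;p&gt;As a rough sketch of this split (the DTO below is hand-written for illustration; an actual class would come from the
code generator), the &lt;code&gt;Student&lt;/code&gt; struct from the example above could be validated in the domain layer like this:&lt;/p&gt;

```php
<?php
// Hand-written illustration of a DTO for the Student struct above; a real
// class would come from the TypeSchema code generator.
class Student
{
    public ?string $firstName = null;
    public ?string $lastName = null;
    public ?int $age = null;
}

// Validation lives in the domain layer, where fitting error messages
// can be produced; the DTO itself only models the data.
function validateStudent(Student $student): array
{
    $errors = [];
    if ($student->firstName === null || $student->firstName === '') {
        $errors[] = 'firstName must not be empty';
    }
    if ($student->age !== null && $student->age < 0) {
        $errors[] = 'age must not be negative';
    }
    return $errors;
}

$student = new Student();
$student->firstName = 'Ada';
$student->age = -1;
print_r(validateStudent($student)); // reports the single error for the negative age
```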

&lt;p&gt;&lt;b&gt;Union&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;We have removed support for the &lt;code&gt;oneOf&lt;/code&gt; keyword. While working on the code generator we noticed that dynamically typed languages can easily
represent actual unions, while statically typed languages like Java or C# have a much harder time representing such dynamic data types. But they all
can represent a tagged union. This is the concept which is now supported in TypeSchema: instead of guessing the fitting schema,
a user needs to provide a concrete type identifier which is mapped to a concrete type definition. This is also
&lt;a href="https://github.com/apioo/typeschema/blob/master/specification/typeschema.json#L130"&gt;heavily used&lt;/a&gt; in our meta-schema to describe the TypeSchema
specification itself.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Type&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;Because of this union change we now also require a &lt;code&gt;type&lt;/code&gt; property on every type. For example, where previously you could use the &lt;code&gt;$ref&lt;/code&gt; keyword,
you now need to use the "reference" type:&lt;/p&gt;

&lt;pre&gt;{
  "type": "reference",
  "target": "My_Type"
}
&lt;/pre&gt;

&lt;p&gt;I'm really happy with the current TypeSchema version and I think we have made many solid design choices for the future. Basically TypeSchema could
evolve into a general JSON format to represent a model in a language-neutral way.&lt;/p&gt;

&lt;h2&gt;Ecosystem&lt;/h2&gt;

&lt;p&gt;To give you a short outlook on the ecosystem, there are several projects in development which are based on TypeSchema. First there is
a new specification called &lt;a href="https://typeapi.org/"&gt;TypeAPI&lt;/a&gt;, which helps to describe complete REST APIs for code generation and which internally
also uses the TypeSchema models. Then there is a platform called &lt;a href="https://typehub.cloud/"&gt;TypeHub&lt;/a&gt;, which helps to manage TypeSchema/TypeAPI
specifications, and the &lt;a href="https://sdkgen.app/"&gt;SDKgen&lt;/a&gt; app, which provides a great code generator to turn a TypeSchema/TypeAPI specification
into client code.&lt;/p&gt;</content>
  </entry>
  <entry>
    <id>https://chrisk.app/blog/introducing-the-deutschlandapi-project</id>
    <title>Introducing the DeutschlandAPI project</title>
    <updated>2024-09-07T23:13:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/introducing-the-deutschlandapi-project" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>In this post I like to make a short introduction to the DeutschlandAPI project and share general thoughts about the concept of a country API.</summary>
    <category term="project"/>
    <category term="api"/>
    <category term="open-government"/>
    <category term="open-data"/>
    <content type="html">&lt;p&gt;I'd like to introduce you to the &lt;a href="https://deutschland-api.dev/"&gt;DeutschlandAPI&lt;/a&gt; project. The DeutschlandAPI
is basically an open API which combines and aggregates multiple open government APIs of Germany into a single,
consistent and easy-to-use API. This project is a first step to help developers get open and public information
about Germany.&lt;/p&gt;

&lt;p&gt;While building this API I thought about the general concept of a country API: basically a standard API, provided
by each country, to get all available information about that country. This could enable many great use-cases where an
app can dynamically get all up-to-date information about a specific country. In the following I will go through
interesting fields which could be useful for such a country API.&lt;/p&gt;

&lt;h2&gt;Geodata&lt;/h2&gt;

&lt;p&gt;First, the API should expose all basic information about how the land of a country is structured; in Germany we can
split this up into states, districts and cities, but for other countries this may be different. This could also include
streets, so that apps can always validate correct address data and build up-to-date dropdowns.&lt;/p&gt;

&lt;h2&gt;Companies&lt;/h2&gt;

&lt;p&gt;Every developed country has some sort of register where every official company is registered. In Germany we have the
&lt;a href="https://www.bundesanzeiger.de/"&gt;Bundesanzeiger&lt;/a&gt;, which is basically such a register; you can also see the annual accounts
for each company, but I think it would be enough to have a register which lists the company name, corporate form, business objective,
physical address and a link to the website.&lt;/p&gt;

&lt;p&gt;The Bundesanzeiger unfortunately has no public API, so it is not included in the DeutschlandAPI, but I would like to add this in
the future. Such an endpoint could make it easier to get information about all companies inside a country. Today there are even services
which sell this kind of company information, but I think it would add great value to have such directories freely available.&lt;/p&gt;

&lt;h2&gt;Warnings&lt;/h2&gt;

&lt;p&gt;Most countries also have a basic warning system to warn citizens in case of fire, extreme weather, or biological or nuclear events.
In Germany we have &lt;a href="https://www.bbk.bund.de/DE/Warnung-Vorsorge/Warnung-in-Deutschland/MoWaS/mowas_node.html"&gt;MoWaS&lt;/a&gt;, which
provides a warning system covering those events. It is also integrated into the DeutschlandAPI project.&lt;/p&gt;

&lt;h2&gt;Statistics&lt;/h2&gt;

&lt;p&gt;It would be great to have general statistics about a country like the population, GDP or unemployment rate. In Germany we have the
&lt;a href="https://www-genesis.destatis.de"&gt;Statistisches Bundesamt&lt;/a&gt;, which collects all kinds of statistical information about Germany.
There are many statistics on many topics such as population, education, housing, economy, trade and finance. I am
currently trying to figure out which of this information would be really valuable for such a statistics endpoint.&lt;/p&gt;

&lt;h2&gt;Jobs&lt;/h2&gt;

&lt;p&gt;Most countries have a job search which is managed by the government. In Germany we have the &lt;a href="https://www.arbeitsagentur.de/"&gt;Arbeitsagentur&lt;/a&gt;,
which is essentially such a government-run job platform. An endpoint like this could help to integrate job searches into different apps depending on the
country. There are of course also many private networks like &lt;a href="https://www.linkedin.com/"&gt;LinkedIn&lt;/a&gt;, but it would be great to be able to integrate
a general job search provided by the government.&lt;/p&gt;

&lt;h2&gt;Electricity&lt;/h2&gt;

&lt;p&gt;Another key piece of information about a country is its power production and consumption. We could add an endpoint which simply returns
the current (and maybe also historical) values of how much power a country has produced or consumed. This is also a great indicator
of how developed a country is.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;The fields mentioned above are first ideas which could be useful for a general country API. The described endpoints are all read-only;
if we think about &lt;code&gt;POST&lt;/code&gt; endpoints, we could open up many more interesting use cases, like replacing emergency numbers such as
110 with an endpoint where every citizen could send a request. But such endpoints would require secure authentication of a citizen,
which is currently not possible. With the DeutschlandAPI project I have tried to build a first country API; if you would like to implement
a similar API for your country, please take a look at our &lt;a href="https://api.deutschland-api.dev/apps/redoc/"&gt;API documentation&lt;/a&gt;.&lt;/p&gt;
  </entry>
  <entry>
    <id>https://chrisk.app/blog/building-infrastructure-software-without-central-authority</id>
    <title>Building infrastructure software without central authority</title>
    <updated>2024-08-30T18:40:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/building-infrastructure-software-without-central-authority" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>In this post I think about building software as an infrastructure tool which works without a central authority</summary>
    <category term="idea"/>
    <category term="software"/>
    <category term="decentralization"/>
    <content type="html">&lt;p&gt;To better explain the problem I am thinking about I like to go back in time.&lt;/p&gt;

&lt;h2&gt;Freshmeat&lt;/h2&gt;

&lt;p&gt;At the start of my open source journey there was a website called Freshmeat, which was used by developers to announce new
releases; you could also browse it to find existing projects. This was before GitHub existed, and at that time I really liked
this site as a central place to get information. Over time demand decreased and the site closed. There is still
a replacement active called &lt;a href="https://freshcode.club"&gt;freshcode.club&lt;/a&gt;, but it no longer has the traction of the original service.&lt;/p&gt;

&lt;h2&gt;Awesome lists&lt;/h2&gt;

&lt;p&gt;I think the modern successor of Freshmeat are the so-called "Awesome lists" on GitHub; basically these are simply repositories
with markdown files containing links to interesting tools. The reason why these lists overtook software directories like Freshmeat
is probably social: every user can add suggestions by opening a pull request. This is really great and has
helped to create "awesome" lists, but over time some problems have emerged:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;Since every repository has a central owner who needs to accept and merge pull requests, there are many lists where
    the original author is no longer active, so nobody can modify the list. You could of course fork such
    a list, but then there are many duplicated lists and nobody knows which one is actively maintained.&lt;/li&gt;
    &lt;li&gt;Most lists also contain outdated content, since it is really difficult to keep those lists up-to-date.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Infrastructure software&lt;/h2&gt;

&lt;p&gt;Software directories like Freshmeat and Awesome lists both have the problem that there is a central authority
which needs to curate each entry. I am currently thinking about a solution for this problem so that we
can build software without a central authority. This could allow us to create lists which are always up-to-date and evolve
over a long period of time. Because of this I like to call it "infrastructure software": a piece of software which
can run on a server without moderation or a central authority.&lt;/p&gt;

&lt;p&gt;You may think: this sounds cool, but how could it be possible, and how can you prevent spam?&lt;/p&gt;

&lt;p&gt;I think we should go back to the basic building blocks of the internet. To protect against spam we need a resource which is limited;
for the internet the perfect limited resource is a domain. You can only add an entry to our list if you have a custom domain, which
automatically limits participation to users who own one. To verify ownership of the domain we could add
a verification step through a DNS TXT record.&lt;/p&gt;
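&lt;p&gt;The verification step could be sketched like this. Note that the token scheme below is my own invention for illustration; the post does not define one:&lt;/p&gt;

```python
# Assumption: the aggregator issues a token like "software-directory-verify=<id>"
# and the domain owner publishes it as a DNS TXT record. The token scheme is
# made up for this sketch.

def is_domain_verified(txt_records, expected_token):
    """Check whether any of the domain's TXT records carries the token
    the aggregator issued for this registration."""
    return any(record.strip() == expected_token for record in txt_records)

# In a real aggregator the records would come from a DNS lookup, for example
# via the third-party dnspython package: dns.resolver.resolve(domain, "TXT").
records = ["v=spf1 -all", "software-directory-verify=abc123"]
print(is_domain_verified(records, "software-directory-verify=abc123"))  # prints True
```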

&lt;p&gt;As a second step, once a domain is registered to our list, we reverse the data flow: instead of adding an entry directly to a central
database, the domain needs to provide an endpoint which returns the information. For example, we could request
the path &lt;code&gt;/.well-known/software.json&lt;/code&gt; on every domain and check the response; on success, this
endpoint returns all information for the domain's entry on the list.&lt;/p&gt;

&lt;p&gt;This way the owner of the domain can easily update the information of the entry by simply updating the &lt;code&gt;software.json&lt;/code&gt;
resource on the server. Our list aggregation service can then check all domains periodically for updates. We could also delete or hide
an entry in case the resource returns an error status code like 404 or 500.&lt;/p&gt;
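&lt;p&gt;A minimal sketch of the aggregation side could look like this. The required fields are a made-up minimal set, since the post does not define the JSON format yet:&lt;/p&gt;

```python
import json
from urllib.request import urlopen

# Assumption: a minimal made-up software.json format; the real format
# still needs to be designed.
REQUIRED_FIELDS = {"name", "description", "url"}

def validate_entry(data):
    """Check that a parsed software.json payload carries the minimal
    fields a directory entry would need."""
    return isinstance(data, dict) and REQUIRED_FIELDS <= set(data.keys())

def fetch_entry(domain, timeout=10):
    """Fetch and validate /.well-known/software.json for a domain.
    Returns the entry dict, or None if the resource is missing or
    invalid (the aggregator could then hide or delete the entry)."""
    try:
        with urlopen(f"https://{domain}/.well-known/software.json",
                     timeout=timeout) as resp:
            if resp.status != 200:
                return None
            data = json.loads(resp.read())
    except Exception:
        return None
    return data if validate_entry(data) else None
```

&lt;p&gt;The periodic re-check then simply calls &lt;code&gt;fetch_entry&lt;/code&gt; for every registered domain and hides entries for which it returns nothing.&lt;/p&gt;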

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;I think this idea could help to build software directories which are always up-to-date and run
for a really long time. Every user on the internet can add a new entry by submitting a custom domain to the list.
There are still some challenges: we need a good JSON format to describe a software entry and there needs to be
a good aggregation service with an intuitive UI, but those technical challenges are easily solvable.&lt;/p&gt;

&lt;p&gt;Currently this is still an early idea and there is no implementation, so please &lt;a href="https://www.apioo.de/en/contact"&gt;contact me&lt;/a&gt;
if you like the idea and would like to participate in such a directory. I am really motivated to move this forward to make the web more decentralized
and better.&lt;/p&gt;
  </entry>
  <entry>
    <id>https://chrisk.app/blog/reboot-my-personal-blog</id>
    <title>Reboot my personal blog</title>
    <updated>2024-08-24T14:57:00Z</updated>
    <link rel="alternate" href="https://chrisk.app/blog/reboot-my-personal-blog" type="text/html"/>
    <author>
      <name>chriskapp</name>
      <uri>https://chrisk.app/</uri>
    </author>
    <summary>This post introduces the reboot of my personal blog. I explain the reasons behind it and the topics which you can
            expect in the future.</summary>
    <category term="meta"/>
    <content type="html">&lt;p&gt;With this post I like to reboot my personal blog. In the past I have used for a long time
&lt;a href="https://medium.com/@chriskapp"&gt;medium.com&lt;/a&gt; as blogging platform but there are several reasons why I am
no longer happy with the platform. I have thought about choosing one of the thousand alternative platforms for blogging
but in the end they all have downsides. Instead of going through all downsides I like to share my thoughts about the
advantages of a self-hosted blog, maybe I can convince also some readers to start again with a self-hosted blog.&lt;/p&gt;

&lt;h2&gt;Data sovereignty&lt;/h2&gt;

&lt;p&gt;If you host your own blog, the content is under your control. An external blogging platform is a company
which needs to make money; since they mostly offer the platform for free, they need to somehow use your content to create revenue.
This is done by adding some kind of partner program or paywall. On your own blog you are completely free
from those concerns and can be sure that your content is not misused.&lt;/p&gt;

&lt;h2&gt;Data durability&lt;/h2&gt;

&lt;p&gt;The thoughts and ideas which you write down are an important part of your legacy. You probably want this content
to be available beyond your lifetime. Blogging platforms are limited to the lifetime of the company behind the platform,
which is often not very long; if the company closes, your content disappears with it. If you host your own blog you
automatically have multiple mechanisms to archive your content. In my case I use a public GitHub
&lt;a href="https://github.com/chriskapp/personal-website/blob/main/resources/blog.xml"&gt;repository&lt;/a&gt; where all
posts are stored. Your blog is probably also covered by &lt;a href="https://archive.org/"&gt;archive.org&lt;/a&gt;, which
over time creates a complete backup of it. Through your own domain and content you basically participate in the
history of the internet.&lt;/p&gt;

&lt;h2&gt;Data quality&lt;/h2&gt;

&lt;p&gt;Most blogging platforms provide some kind of &lt;abbr title="What You See Is What You Get"&gt;WYSIWYG&lt;/abbr&gt; editor to write
your content. Such editors often produce ugly HTML and are also limited. Especially for development content I
often find posts where code examples or syntax highlighting are broken. If you want to preserve your content for the long term,
it is probably better to write the HTML yourself; this ensures better data quality and makes the content more readable.&lt;/p&gt;

&lt;h2&gt;Decentralization&lt;/h2&gt;

&lt;p&gt;This is an idealistic point, but if you host your own blog you help to decentralize the web. Probably the largest
problem of the current web is centralization: there are only a few centralized platforms, or
so-called data silos, like Twitter, Facebook etc. where all content gets created. In the end those platforms depend
on your content; if you decide to move that content to your own blog, you take some power away from those platforms.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;These points have convinced me to start again with my own self-hosted blog. But there are of course also some
disadvantages. The largest is probably discoverability: on a central platform you
automatically get readers from the platform's user base, while on your own blog you depend on search engines to bring in
new readers. In the future it would be cool to have some kind of aggregation service where self-hosted blogs could
register, to reach a wider audience and support those self-hosted blogs.&lt;/p&gt;

&lt;h2&gt;Future&lt;/h2&gt;

&lt;p&gt;Regarding this blog: in the future I will write content around my open source projects and general development topics like API management,
REST, code generation, specifications and decentralization. I will also go through my old &lt;a href="https://chriskapp.medium.com/"&gt;medium.com&lt;/a&gt;
posts and transfer them to this blog. If you are interested, feel free to subscribe to the &lt;a href="https://chrisk.app/feed"&gt;feed&lt;/a&gt;.&lt;/p&gt;</content>
  </entry>
</feed>
