Set up continuous integration in Forgejo #75

Open
Vectornaut wants to merge 12 commits from Vectornaut/dyna3:forgejo-ci into main
Member

The branch to be merged adds a continuous integration workflow to the repository, addressing my part of issue #18. It uses the [Forgejo Actions](https://forgejo.org/docs/next/user/actions/) framework.

I've started a [wiki page](https://code.studioinfinity.org/glen/dyna3/wiki/Continuous-integration) to document the continuous integration system. In particular, this page explains how to [run continuous integration checks on a development machine](wiki/Continuous-integration#execution), either directly or in a container. The call that does this is simple enough that I think it qualifies as "a script that runs the action steps directly on the development machine." If you agree, I'll mark that part of issue #18 as completed.

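For orientation, the workflow boils down to something like the sketch below. This is a condensed paraphrase, not the exact file on the branch: the job bodies and commands are simplified, and the real workflow factors some steps into local actions.

```yaml
# Condensed sketch of the workflow this branch adds (Forgejo Actions
# workflows live under .forgejo/workflows/). Commands are paraphrased.
on: pull_request

jobs:
  build:
    runs-on: docker
    steps:
      - uses: https://code.forgejo.org/actions/checkout@v4
      - run: trunk build                           # build the application with Trunk
  test:
    runs-on: docker
    steps:
      - uses: https://code.forgejo.org/actions/checkout@v4
      - run: cargo test                            # run the automated test suite
  run-examples:
    runs-on: docker
    steps:
      - uses: https://code.forgejo.org/actions/checkout@v4
      - run: cargo run --example point-on-sphere   # check that the examples still run
```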
Vectornaut added 4 commits 2025-03-14 00:05:21 +00:00
This workflow is too spendthrift for deployment: every job sets up Rust,
and the `build` job also installs Trunk, which takes even more
resources. The workflow serves well, however, as a proof of concept.
Introducing a syntax error into `src/main.rs` causes the `build` and
`test` jobs to fail, but allows `run-examples` to succeed. Similarly,
introducing a syntax error into `examples/point-on-sphere.rs` causes
`run-examples` to fail, but allows the other jobs to succeed.
Write a Dockerfile that describes a Debian image with our build
environment set up in it. Write a script that builds and tags the custom
image. Set the continuous integration workflow to run on the custom
image, pulled from Docker's local image store.

Running the examples on the custom image takes the same steps as running
them on a development machine, so we no longer gain anything from
factoring out the `cargo run --example` steps. That means we can get rid
of the `run-examples` action. Since the build environment is now set up
ahead of time, we can get rid of the `setup-rust` action too.
Fail continuous integration on warnings
Some checks failed
/ build (pull_request) Failing after 1s
/ test (pull_request) Failing after 1s
/ run-examples (pull_request) Failing after 1s
71f4cd184c
Owner

OK, I finally have a fresh Debian 12 instance I can SSH to. I am really not sure what went wrong the first two times, but going through the "instance create wizard" and basically generating all of the options from scratch worked. As soon as I can I will point fjord.studioinfinity.org to it, and then try to install the runner software per the instructions.

Author
Member

Awesome. By the way, did you cancel the continuous integration workflow for this pull request? If not, it seems to have timed out. Maybe registering the runner will cue a retry.

Owner

OK, this is back in your court: I installed the forgejo-runner on fjord.studioinfinity.org, I registered the runner, and I started it as a daemon. When I did that and went to a previously timed-out job and clicked on the "re-run all jobs" button, they all got picked up by the runner right away, but they also all failed immediately with some docker error I don't understand. If you could try to figure out what is going wrong and how we might address it, that would be great. We may need to do this jointly at tomorrow's meeting, for example. But I think we are getting close! Thanks. In the meantime I will try to arrange that the forgejo-runner daemon on fjord will fire up immediately on reboot.

Owner

Ugh. When I followed the instructions to [make the runner a system service](https://forgejo.org/docs/latest/admin/runner-installation/#running-as-a-systemd-service), and clicked on re-run all jobs again, they went into waiting state again with no progress... I will try to debug.

Owner

OK, actually the problem seems worse than that: even after I restart the service and all the jobs run, then when I click "re-run all jobs", they go back into waiting state and sit there waiting until I _again_ restart the service on fjord, when they get picked up. So right now we are in a state of needing to restart the service on fjord _every_ time there are new jobs that need to be run, which is basically useless. After some searching, I have been unable to figure out how we could possibly debug this: the runner shows to be in "Idle" state in the runner list for the StudioInfinity organization, but the jobs just sit there waiting even though there is an idle runner. I can't find any log entries on either side, but maybe I am unclear on where to look...

So I guess if you could look into that as well. In the meantime, I will restart the server one more time so you can go back to seeing the error messages from the job failing. If you make updates here and so have waiting jobs here and want me to log in to restart the forgejo-runner service, please just text me. But of course to merge this PR we need to have the service actually responding to new jobs, so we need to debug that, too.

Thanks! Sorry this isn't smoother. But it pretty closely parallels my experience with drone on gitea, that it was very difficult to get CI jobs to trigger automatically. I don't know why this is so hard... :-(

Owner

Just one additional detail: either rebooting fjord or logging on to fjord and executing `sudo systemctl restart forgejo-runner` causes the jobs to be noticed and to run. Other than that, I can't find any way to get the jobs to run after that first batch of pending jobs that are noticed on startup of the runner. :-/ Hopefully you will find some leads.

Author
Member

I've looked at the Docker output from the last workflow failure:

```
fjord(version:v6.3.1) received task 10 of job 1, be triggered by event: pull_request
workflow prepared
🚀  Start image=dyna3:ci
  🐳  docker pull image=dyna3:ci platform= username= forcePull=false
Error response from daemon: pull access denied for dyna3, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
```

The error message suggests to me that the CI image, `dyna3:ci`, either isn't in Docker's image store or isn't accessible from the runner's user account. Here are some troubleshooting ideas.

1. Since I never put instructions for running CI remotely on the wiki, could you confirm that you did the "Install Docker and build the continuous integration container image" steps? These steps are required in the remote case—as you probably figured out already.

2. To check whether the CI image is in Docker's image store, could you call `docker image ls` from the account you used to run the `tools/ci-image/build` script and tell me about the output? On my machine, the output line describing the CI image looks like this:

   ```
   REPOSITORY   TAG   IMAGE ID       CREATED       SIZE
   dyna3        ci    4e45a57dfb6f   2 weeks ago   1.15GB
   ```

3. To help check whether the CI image is accessible from the runner's user account, could you call `docker image ls` again from that account? I'm not sure the results would depend on the runner account's permissions, but I figure it's worth a try.

Owner

@Vectornaut wrote in [#75 (comment)](https://code.studioinfinity.org/StudioInfinity/dyna3/pulls/75#issuecomment-2433):

> 1. Since I never put instructions for running CI remotely on the wiki, could you confirm that you did the "Install Docker and build the continuous integration container image" steps? These steps are required in the remote case—as you probably figured out already.

I did install Docker on fjord per the forgejo-runner installation instructions, and added the `runner` user to the Docker group. However, I am not clear on what is really meant by the "build the continuous integration container image" step -- continuous integration, more or less by definition, has to run entirely from the repository contents. If the source files in the repository define a container image that will then be used in testing, a PR that changes that definition has to be tested on checkin, so that image would have to be rebuilt. If there is no way for the CI workflow to have a step that builds an image (if necessary) to be used in later steps, then we will need some other strategy.

For example, in a clearly defined and labeled "derived files" area of the repo, we could have the container image as a file in the repo, with warnings that it is not to be generated manually, and then implement a commit hook that checks whether that image is up to date with its source files as of that commit, and if not, re-generates the container image to be included in the commit. That way, every commit will always include an up-to-date container image (presuming that the container image can be used directly from the repo). Or I suppose there are other possible architectures here.

But it can't be the case that there is a step that needs to be performed manually on fjord that depends on contents of the repo that might be changed in a PR. (A mild exception would be a one-time bootstrapping step, with the property that somehow the container image would update itself and replace itself and the new image be used in its place for all later steps, any time there was a check-in to the repo that could affect the image.)

Thanks for letting me know how to proceed here in light of these considerations.

> * To check whether the CI image is in Docker's image store, could you call `docker image ls` from the account you used to run the `tools/ci-image/build` script and tell me about the output? On my machine, the output line describing the CI image looks like this:
>
>   ```
>   REPOSITORY   TAG   IMAGE ID       CREATED       SIZE
>   dyna3        ci    4e45a57dfb6f   2 weeks ago   1.15GB
>   ```

I can't imagine that there are any images in Docker's image store on fjord, other than the "hello world" image used for verification that Docker is running.

And so I think that point 3 is also moot at the moment.

Owner

I suppose another possible strategy would be to have CI workflows that depend only on externally maintained publicly available container images and build up whatever infrastructure we need on every workflow run.

Author
Member

> […] continuous integration, more or less by definition, has to run entirely from the repository contents

Good point; it would be nice for the repository to completely specify the build environment.

> I suppose another possible strategy would be to have CI workflows that depend only on externally maintained publicly available container images and build up whatever infrastructure we need on every workflow run.

Yes, that's the easiest approach I can think of. It's used in bbfdf2b, which sets up the build environment as the first step in the CI process. This does have the downside of making the CI job substantially slower and more resource-intensive, as described in the commit message. We could make things a little more efficient by running on the [`rust:1.85-slim-bookworm`](https://hub.docker.com/layers/library/rust/1.85-slim-bookworm/images/sha256-98c3b8e2721a704bc2eb62b98ffde90aa679060beab5c0261fc9fb3e53dcb96c) base image instead of Forgejo Runner's default Debian image.
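To make the tradeoff concrete, here is roughly what "set up the build environment as the first step" looks like in a job definition. This is a simplified sketch, not the exact jobs in bbfdf2b: the wasm target, the `cargo install` invocation, and any extra system packages are assumptions about what the setup step needs.

```yaml
# Sketch of a build job that recreates its environment on every run.
build:
  runs-on: docker
  container:
    image: rust:1.85-slim-bookworm          # stock Rust image, no Trunk preinstalled
  steps:
    - uses: https://code.forgejo.org/actions/checkout@v4
    - name: Set up the build environment
      run: |
        # additional build tools from apt may also be needed on the slim image
        rustup target add wasm32-unknown-unknown
        cargo install trunk                 # this is the slow, memory-hungry part
    - name: Build the application
      run: trunk build
```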

Author
Member

I have two options ready to push. Which one would you like me to try?

#### Install Trunk during continuous integration

On my development machine, this adds about 3.5 minutes to the CI workflow, which runs in 0.5 minutes on an image where Trunk is already installed.

In principle, it should be possible to save the Trunk installation between runs using the [cache action](https://forgejo.org/docs/next/user/actions/#cache). However, the cache action fails with a mysterious "Invalid URL" error when I test it on my development machine using `forgejo-runner exec`.

#### Break isolation to build the continuous integration image

⚠️ *Pushing this option will (hopefully) reconfigure the Docker image store on `fjord`!*

A job with the [`runs-on: self-hosted`](https://forgejo.org/docs/next/user/actions/#jobsjob_idruns-on) label runs directly on the host, with no isolation. A job with this label can therefore modify the host's Docker image store. The `setup-image` job shown below builds and tags the CI image described by the Dockerfile in `tools/ci-image` on the branch where CI is running.

```yaml
setup-image:
  runs-on: self-hosted
  defaults:
    run:
      working-directory: tools/ci-image
  steps:
    - uses: https://code.forgejo.org/actions/checkout@v4
    - run: docker build . --tag dyna3:ci
```

By making the other CI jobs depend on `setup-image` and giving them the `runs-on: dyna3:ci` label, we can make sure that they always run on the CI image specified by the branch where CI is running.

I've verified that this approach works on my development machine by removing the `dyna3:ci` image and then calling `forgejo-runner exec --image -self-hosted` from the top level of the Dyna3 repository. As expected, the `setup-image` job rebuilds the `dyna3:ci` image, making it available to the rest of the workflow.
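Under that scheme, the dependent jobs would look something like the sketch below. The `needs: setup-image` dependency is the key part; the exact way each job requests the `dyna3:ci` image (a `runs-on` label here, possibly a `container.image` entry in the actual workflow) and the step contents are assumptions for illustration.

```yaml
# Sketch of a job that waits for setup-image and then runs on the
# freshly built CI image. Step contents are illustrative.
test:
  needs: setup-image
  runs-on: dyna3:ci
  steps:
    - uses: https://code.forgejo.org/actions/checkout@v4
    - run: cargo test
```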

Owner

And what's the additional time required by the isolation-breaking one, as compared to when you had the image pre-created? Is the self-hosted step quicker the second time, or does it always fully re-run?

In the "install trunk" version, does it install three times? If so, would it be fair to guess that it would only add 1.2 minutes if you collapsed all three tests into one job?

Finally, if we are using this locally as the way to run tests during development, we definitely want that as fast as possible, i.e. every step runs directly on my machine not through docker, relying on me to have trunk installed locally already, etc. Is that how it's working now (in one or the other of these possibilities)? If not, which one is easier to adapt to working that way?

Thanks for clarifying.

Author
Member

> And what's the additional time required by the isolation-breaking one, as compared to when you had the image pre-created? Is the self-hosted step quicker the second time, or does it always fully re-run?

Once Docker has built the CI image, a repeat build only takes about 0.05 seconds on my machine. Thus, after the first run, the workflow that tries to rebuild the image takes only a fraction of a second longer than the workflow that expects a pre-built image.

> In the "install trunk" version, does it install three times? If so, would it be fair to guess that it would only add 1.2 minutes if you collapsed all three tests into one job?

No, it only installs once. I realized after our meeting that only the `build` job requires extra setup on top of the [`1.85-slim-bookworm`](https://hub.docker.com/layers/library/rust/1.85-slim-bookworm/images/sha256-98c3b8e2721a704bc2eb62b98ffde90aa679060beab5c0261fc9fb3e53dcb96c) image. The 3.5 minutes is for a single installation of Trunk.

Author
Member

> Finally, if we are using this locally as the way to run tests during development, we definitely want that as fast as possible, i.e. every step runs directly on my machine not through docker, relying on me to have trunk installed locally already, etc. Is that how it's working now (in one or the other of these possibilities)? If not, which one is easier to adapt to working that way?

I think there was a mistake in my procedure for running CI directly on your development system (details below). Thus, I'll need to design a new procedure regardless of which option we pick. I think that will be equally annoying for both options.

#### Details of mistake in direct run procedure

When I follow the [procedure](https://code.studioinfinity.org/StudioInfinity/dyna3/wiki/Continuous-integration#running-directly-on-your-development-system) for "Running directly on your development system" on commit 71f4cd1 (the commit I had in mind when I wrote the procedure), each job still runs in a Docker container created from the `dyna3:ci` image. I think this is because the `container.image: dyna3:ci` label overrides the `--image -self-hosted` flag passed to `forgejo-runner exec`. I may have written that procedure while looking at commit bbfdf2b, which doesn't say which image each job should run on, and not tested it properly on later commits.

Owner

OK in light of all the above, the question is how annoying will a 4 minute wait for automated runs be? Since our s.o.p. should be to always run the tests locally before committing (in fact, I would be happy with a commit hook to that effect), the CI tests are generally a formality. Hence I think the "install trunk on top of a rust image" would be the way to go. On the other hand, if you think the 4 minute wait would be detrimental to your workflow, go ahead and check in the "make custom image locally" as long as you are sure it will auto-rebuild when anything that the image depends on is updated in the repo.

And yes either way please get the "local testing" working. It's OK with me if the local testing doesn't go through forgejo-runner; perhaps the forgejo-runner jobs invoke the local testing script once the container is up and running, for example. But of course either way keep the testing as DRY as possible.

Author
Member

> OK in light of all the above, the question is how annoying will a 4 minute wait for automated runs be? Since our s.o.p. should be to always run the tests locally before committing (in fact, I would be happy with a commit hook to that effect), the CI tests are generally a formality.

I agree. Moreover, I'd prefer to always run the containerized CI before committing, with an optional direct run for extra environment diversity. In light of that, I think it would be useful to make the containerized CI run quickly.

Owner

P.S. Most projects have a test harness, and typically on your own development machine, you just invoke the test harness directly, and the CI workflow specification also invokes the test harness. I think we don't quite have a unified harness for all tests; I think some run with `cargo test` and others run by invoking a shell script. So you could (A) create our own custom shell script in a "tools" directory, which with no command-line arguments runs "all the tests" or runs the tests "named" on the command line (the latter feature in case we want to break up the CI workflow into different jobs for different tests). Or (B) try a language-agnostic test harness like [BATS](https://github.com/bats-core/bats-core) or [mdtest](https://github.com/elucent/mdtest). The latter does introduce a dependency on Python, but Python is so ubiquitous these days I am not too concerned, and it seems reasonably possible we will have a Python dependency for the doc system anyway (please add an issue for setting up a doc system that colocates documentation with code; I was going to link it here but couldn't find one).

Author
Member

> I think we don't quite have a unified harness for all tests; I think some run with cargo test and others run by invoking a shell script.

Calling `cargo test` should run all of our automated tests. When you say that some tests run by invoking a shell script, are you talking about the `build` and `run-examples` jobs in the CI workflow, which verify that the application builds without warnings or errors and that the example code builds and runs without warnings or errors?

Owner

Hmm crossed with my ps. I think your comment is urging toward the "make custom image locally" alternative. As mentioned, I am fine with that as long as (a) I can easily run tests uncontainerized locally as well, and (b) the custom image auto-rebuilds as needed.

Owner

@Vectornaut wrote in [#75 (comment)](https://code.studioinfinity.org/StudioInfinity/dyna3/pulls/75#issuecomment-2444):

> > I think we don't quite have a unified harness for all tests; I think some run with cargo test and others run by invoking a shell script.
>
> Calling `cargo test` should run all of our automated tests. When you say that some tests run by invoking a shell script, are you talking about the `build` and `run-examples` jobs in the CI workflow, which verify that the application builds without warnings or errors and that the example code builds and runs without warnings or errors?

Yes, those are tests too!

Owner

And at some point we might want to beef up the `run-examples` test with checking the output, if that's a sensible thing.

Author
Member

I've filed an [issue](https://code.studioinfinity.org/StudioInfinity/dyna3/issues/76) about adding a convenient way to do all three continuous integration checks in one command. If we added that, would it fulfill the requirement that we should be able to run continuous integration directly on a development system? The checks wouldn't be running in Forgejo Runner, but you could argue that this just removes another layer of indirection between the checks and the development system.

If we can get all of the CI checks to run as separate tests, maybe it wouldn't be so bad to lump them together into a single job during the CI workflow. The workflow output would still show which tests failed, so we could still get the same level of granularity if we looked for it; it just wouldn't be broken down as neatly in the Forgejo interface.
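For example, lumping everything into one job might look like the sketch below. The job name, step names, and commands here are illustrative assumptions rather than the actual workflow; the point is just that each named step still reports its own pass/fail status in the job log.

```yaml
# Sketch of a single CI job whose steps mirror the current build, test,
# and run-examples jobs. Step names and commands are illustrative.
checks:
  runs-on: docker
  container:
    image: rust:1.85-slim-bookworm
  steps:
    - uses: https://code.forgejo.org/actions/checkout@v4
    - name: Build
      run: cargo build          # or `trunk build`, once Trunk is available in the image
    - name: Test
      run: cargo test
    - name: Run examples
      run: cargo run --example point-on-sphere
```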

Owner

I agree, when running everything locally in development, _not_ running through forgejo-runner is definitely not a negative, and perhaps a mild positive.

I am also completely fine with all of these checks running as a single CI job if when clicking on the failed job in the forgejo UI it's easy to see what specific thing actually failed. From what you say, I imagine it will be.

Finally, if the command line to run all tests gets too long/tricky to remember and/or type, I respectfully request a `tools/test.sh` (or similarly easy-to-find-and-use script, just not at the very top level of the project directory structure) that just executes that command line. Or if cargo allows the definition of auto-discovering custom subcommands (as e.g. `npm` does), it would also be fine to arrange that `cargo tests` or `cargo alltests` or some such run all the tests, as opposed to having a script. As for the boundary of what is too long/tricky, I think `cargo test --examples --feature runExamples` or something like that is already just slightly past that boundary, hence this comment.

Vectornaut added 2 commits 2025-03-27 05:06:24 +00:00
Remove the CI image setup script
Some checks failed
/ setup-image (pull_request) Failing after 23m20s
/ build (pull_request) Has been cancelled
/ test (pull_request) Has been cancelled
/ run-examples (pull_request) Has been cancelled
15375dc932
The CI workflow now builds and tags the CI image automatically, so we
don't need the manual setup script anymore.
Author
Member

I just pushed a commit with the image setup job (15375dc). If it runs as expected, I'll start working on using Cargo to run CI directly on a development machine, as discussed above. Right now, the workflow is shown as "waiting to run"; maybe it needs to be poked by restarting the Forgejo Runner service again?

Owner

Unfortunately, no combination of restarting the forgejo-runner service on fjord, canceling and re-running the setup-image job, or rebooting fjord has managed to budge the setup-image job from its "Waiting to run" status. Ideas? Are we sure that "self-hosted" is supported on a remote host? I will try de-registering and re-registering the fjord runner.

Owner

Deleting and re-registering the forgejo runner didn't help either. I didn't see anything in the config file that seemed to be about authorizing self-hosted jobs, but I did set the log level to debug. Here is the entire output of `journalctl -u forgejo-runner` since the last restart with all of these changes in place:

```
Mar 27 15:23:51 ip-172-31-1-77 forgejo-runner[1296]: time="2025-03-27T15:23:51Z" level=info msg="log level changed to debug" func="[initLogging]" file="[daemon.go:168]"
Mar 27 15:23:51 ip-172-31-1-77 forgejo-runner[1296]: time="2025-03-27T15:23:51Z" level=info msg="Starting runner daemon" func="[func6]" file="[daemon.go:38]"
Mar 27 15:23:51 ip-172-31-1-77 forgejo-runner[1296]: time="2025-03-27T15:23:51Z" level=debug msg="gc: 2025-03-27 15:23:51.130483018 +0000 UTC m=+0.008136815" func="[gcCache]" file="[handler.go:515]" module=artifactcache
Mar 27 15:23:51 ip-172-31-1-77 forgejo-runner[1296]: time="2025-03-27T15:23:51Z" level=info msg="runner: fjord4, with version: v6.3.1, with labels: [docker], declared successfully" func="[func6]" file="[daemon.go:108]"
Mar 27 15:23:51 ip-172-31-1-77 forgejo-runner[1296]: time="2025-03-27T15:23:51Z" level=info msg="[poller 0] launched" func="[poll]" file="[poller.go:101]"
```

I don't see anything useful there... I am at a dead end. Any ideas?

Author
Member

Which labels did you choose when you registered the runner? These should be listed under the `"labels"` key in the registration result file, which is called `.runner` by default. If you didn't choose any labels, that might explain the current behavior.

#### A hypothesis about what's going on

This is based on what I've read about [the runner registration process](https://forgejo.org/docs/next/admin/runner-installation/#standard-registration) and [the interpretation of runner labels](https://forgejo.org/docs/next/admin/actions/#choosing-labels).

When Forgejo Actions is distributing jobs to runners, it figures out whether a given runner can handle a given job by comparing the `"labels"` key in the runner's registration result file with the `runs-on` key in the job description. If you don't choose any labels during registration, the `"labels"` key holds only the default label: `docker:docker://node:20-bullseye`. This tells Forgejo Actions that the runner can handle jobs with the `runs-on: docker` label, and it will run those jobs on the Docker image called `node:20-bullseye` unless the job specifies otherwise.

I suspect that when Forgejo tries to run the `setup-image` job, it looks for a runner with label name `self-hosted`. If our runner only has labels with label name `docker`, Forgejo Actions will conclude that it can't handle the `setup-image` job. The other jobs are then blocked from running, because they depend on `setup-image`.

#### Testing the hypothesis

Please let me know whether you can find these files, and post the contents of the ones you can find:

- `config.yml`
- `.runner`

I'm guessing they're supposed to be in the runner user's home directory.

#### A possible solution

Re-register the runner, choosing the following labels:

```
self-hosted:host://-self-hosted,docker:docker://debian:bookworm-slim
```

This should tell Forgejo that:

- Jobs with the label `runs-on: self-hosted` should run in a shell directly on the host, unless they specify a different image.
  - This label is taken from the [runner label documentation](https://forgejo.org/docs/next/admin/actions/#host).
- Jobs with the label `runs-on: docker` should run in a Docker container created from the `debian:bookworm-slim` image, unless they specify a different image.
  - To make sure we're always specifying the CI image in our job descriptions, I've chosen a default image that should be unable to support our containerized CI jobs.
  - This label is based on the default runner label described [here](https://forgejo.org/docs/next/admin/runner-installation/#standard-registration).
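If that registration works, our workflow's jobs should match the new labels roughly as shown below. This is an assumption about how the matching plays out, condensed from the jobs discussed above, not a copy of the actual workflow file.

```yaml
# Sketch: how the jobs map onto the proposed runner labels.
setup-image:
  runs-on: self-hosted          # matched by the self-hosted label; runs in a shell on fjord
  steps:
    - uses: https://code.forgejo.org/actions/checkout@v4
    - run: docker build tools/ci-image --tag dyna3:ci

test:
  needs: setup-image
  runs-on: docker               # matched by the docker label...
  container:
    image: dyna3:ci             # ...with the default debian image overridden by our CI image
  steps:
    - uses: https://code.forgejo.org/actions/checkout@v4
    - run: cargo test
```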
Owner

Every time I registered, I used the default labels, since that's what the example did and there was no discussion of when/why you might want different labels. I will unregister the runner and re-register it with the suggested labels. Thanks.

Owner

OK, it started! And then promptly failed with `Cannot find: node in PATH`. Is this referring to the JavaScript runtime `node`? Do we somehow depend on JavaScript? Or is it something else? You can see the transcript by clicking on the `Details` link of the failure, below.

Author
Member

> OK, it started! And then promptly failed with `Cannot find: node in PATH`. Is this referring to the JavaScript runtime `node`? Do we somehow depend on JavaScript? Or is it something else? You can see the transcript by clicking on the `Details` link of the failure, below.

Nice! The `setup-image` job checks out the repository with Forgejo's built-in [`checkout`](https://code.forgejo.org/actions/checkout/) action, which does indeed seem to be distributed as JavaScript. Would you be willing to install Node alongside Docker on the runner host?

The `checkout` action runs fine on my machine, which has Node 21.7.3 installed. It also runs fine on the default `node:20-bullseye` Docker image, which presumably has Node 20. Confusingly, it _also_ runs fine on the `dyna3:ci` image, which does _not_ have Node installed. I'm looking into how that's possible.

Author
Member

I've confirmed that the `checkout` action uses Node 20 by looking at its [`action.yml`](https://code.forgejo.org/actions/checkout/src/branch/main/action.yml). I'm still not sure how it's able to run in a container that doesn't have Node installed. Maybe calling `checkout` from inside a container causes it to run in its own container? The Gitea Actions documentation sort of [suggests](https://docs.gitea.com/usage/actions/design#connection-3-act-runner-to-internet) that some actions can do this… although it also seems to say that `checkout` doesn't.

Vectornaut added 1 commit 2025-03-28 01:28:59 +00:00
Tell Docker to build Trunk with only one job
Some checks failed
/ setup-image (pull_request) Failing after 23m14s
/ build (pull_request) Has been cancelled
/ test (pull_request) Has been cancelled
/ run-examples (pull_request) Has been cancelled
f212cfadf4
This might reduce the amount of memory it takes to build the CI image.
Author
Member

Now that we've installed Node 20 on the runner host, the `setup-image` job can run! Unfortunately, building the CI image seems to crash the host by eating up its entire 2GB of RAM.

I just pushed a commit that might reduce the build's memory requirements by telling Cargo to build Trunk with only one job. I'm not optimistic about this approach, because it doesn't actually seem to reduce the build's peak memory use on my machine. If the cost of rebooting the runner host is low, though, maybe it's worth a try?

On my machine, the single-job build takes between 5 and 10 minutes.

Owner

Restarted fjord, the job got 511 seconds and then the machine hung again. I will try upgrading the instance to t3a.medium and we can see how that goes. Then I can read up on any ways to get value from the contract I bought. It wasn't a reserved instance, it was some other plan. That's one of the things I dislike about AWS, they constantly have different pricing plans and schemes and deals, and it is hard to figure out/keep track of.

Vectornaut added 1 commit 2025-03-28 03:46:14 +00:00
Run CI on a stock image, installing Trunk by hand
Some checks failed
/ build (pull_request) Failing after 4s
/ test (pull_request) Failing after 3s
/ run-examples (pull_request) Failing after 4s
4251242953
Instead of building a custom CI image with Trunk pre-installed, let's
try running CI on a stock image, installing Trunk from a binary on every
workflow run.
Author
Member

Before you upgrade to a t3a.medium instance, let's try something completely different. The latest commit (4251242953) runs CI on a stock Docker image, getting rid of the setup-image step. On every workflow run, the build step installs Trunk by hand by downloading a binary from the release page and unpacking it into Cargo's binaries folder.

This version of the workflow runs entirely in Docker. If we like it, we can uninstall Node and re-register the runner with only the default label.

Right now, the workflow will only run on x86_64 machines, because I've hard-coded that choice of binary into the workflow. If we like this approach, we can probably make the setup-trunk action find out the host architecture and pick the appropriate binary.

The Trunk version is currently hard-coded into the workflow. However, the same was true when we were using a custom image: a particular version of Trunk would be baked into the image whenever it got built. If we ever go back to a custom image, I think we should choose the Trunk version explicitly in the Dockerfile.

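A hedged sketch of how a future `setup-trunk` step might pick the binary by architecture; the non-x86_64 asset names are an assumption about Trunk's release naming, so treat this as untested:

```sh
# Detect the host architecture and download the matching Trunk release asset.
# Assumes the release provides a tarball for this architecture (e.g. x86_64 or aarch64).
ARCH="$(uname -m)"
curl --output - --proto '=https' --tlsv1.2 --location --silent --show-error --fail \
  "https://github.com/trunk-rs/trunk/releases/download/v0.21.12/trunk-${ARCH}-unknown-linux-gnu.tar.gz" \
  | tar --gunzip --extract --file - --directory "$HOME/.cargo/bin"
```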
Owner

Hmm, the issue seems now to be that image rust:1.85-bookworm does not have node installed, see the details below.

Author
Member

> Hmm, the issue seems now to be that image rust:1.85-bookworm does not have node installed, see the details below.

Okay, that's what I'd expect. Let's set aside the mystery of why the checkout action in commit 4251242 does succeed on my development machine, even though it's using the same Node-less container image. For now, I'll switch to the cimg/rust:1.85-node image from CircleCI. The company seems reputable, so I'll take the liberty of checking in a commit that runs their third-party container image on our host. Let me know if I should check with you before doing that in the future.

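If you want a quick local sanity check before that commit lands (assuming Docker is available on your machine), something like this should show the difference between the two images:

```sh
# The CircleCI convenience image ships Node; the stock Rust image does not.
docker run --rm cimg/rust:1.85-node node --version
docker run --rm rust:1.85-bookworm node --version   # expected to fail: node is not installed
```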
Owner

If you trust an organization and the image is open-source licensed, feel free to check in a use of it.

I feel like we are heading toward having a separate project that maintains and hosts our own favorite image... if you just want to go that way now, also feel free.

Author
Member

> If you trust an organization and the image is open-source licensed, feel free to check in a use of it.

Would it matter if the image weren't open-source licensed? We're just using it as a tool, not shipping it with our product. To me, using it would be akin to developing dyna3 on a proprietary IDE, or hosting its source code on GitHub and using GitHub Actions to test and deploy it. Those are compatible with most open-source licenses, right?

Author
Member

It turns out that cimg/rust is MIT-licensed, though!

Vectornaut added 1 commit 2025-03-28 05:35:50 +00:00
Switch to CircleCI's Rust + Node container image
All checks were successful
/ build (pull_request) Successful in 2m24s
/ test (pull_request) Successful in 1m8s
/ run-examples (pull_request) Successful in 1m3s
7f45e98b46
This should make the `checkout` work on the runner host.
Owner

Yay, three green checks. Now you need to push some additional commit and see if the jobs re-run on their own, without me logging in and restarting the forgejo-runner service (like I just did).

Owner

@Vectornaut wrote in #75 (comment):

> It turns out that cimg/rust is MIT-licensed, though!

Great. Then we don't need to have the discussion at the moment. ;-)

Author
Member

> Now you need to push some additional commit and see if the jobs re-run on their own

I'm going to try to address issue #76 in this pull request, so we should have plenty more commits to test that with.

Vectornaut added 1 commit 2025-03-28 07:37:13 +00:00
Run the examples as tests
All checks were successful
/ build (pull_request) Successful in 1m17s
/ test (pull_request) Successful in 1m8s
0a9d234557
Author
Member

> Yes, [the examples] are tests too!

I've now configured the examples to run as tests using Cargo's target selection tools. All I had to do was set test to true and harness to false for each example target.

> And at some point we might want to beef up the run-examples test with checking the output, if that's a sensible thing.

This is now issue #77.

> Now you need to push some additional commit and see if the jobs re-run on their own

When I pushed commit 0a9d234, the jobs re-ran immediately!

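For anyone trying this locally, the upshot of those manifest settings is that the examples now ride along with the ordinary test run; a small illustration (assuming you're in `app-proto`):

```sh
# With test = true and harness = false on each example target, plain `cargo test`
# builds and runs the examples as tests; target selection can also pick one out.
cargo test
cargo test --example irisawa-hexlet
```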
Vectornaut added 1 commit 2025-03-28 19:30:22 +00:00
Do the Trunk build check as an end-to-end test
All checks were successful
/ test (pull_request) Successful in 2m18s
992fb76419
This consolidates our whole CI workflow into `cargo test`.
Author
Member

> And yes either way please get the "local testing" working. It's OK with me if the local testing doesn't go through forgejo-runner; perhaps the forgejo-runner jobs invoke the local testing script once the container is up and running, for example.

Done! Commit 992fb76 turns the Trunk build check into an end-to-end test, consolidating our whole CI workflow into cargo test. To run CI directly on your development system, rather than in a container, just call cargo test from the app-proto directory and check whether the tests pass with no build warnings.
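In other words, the local check is the same command the workflow runs; a minimal sketch, assuming you start from the repository root:

```sh
# Run the CI check locally; -D warnings mirrors the workflow's warnings-as-errors setting.
cd app-proto
RUSTFLAGS='-D warnings' cargo test
```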

This pull request should now be ready for review. Once it's mostly stable, I'll update the CI wiki page, since the details have changed substantially since I opened the PR.

Owner
> git fetch vectornaut
[...reasonable output elided...]
> git checkout forgejo-ci
branch 'forgejo-ci' set up to track 'vectornaut/forgejo-ci'.
Switched to a new branch 'forgejo-ci'
> cd app-proto
> cargo test
[ ... lots of stuff elided, but then:]
  Compiling sycamore-web v0.9.0-beta.3
error[E0658]: lint reasons are experimental
   --> /home/glen/.cargo/registry/src/index.crates.io-6f17d22bba15001f/sycamore-web-0.9.0-beta.3/src/lib.rs:181:21
    |
181 |     #[allow(unused, reason = "Unused in DOM mode.")]
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: see issue #54503 <https://github.com/rust-lang/rust/issues/54503> for more information

For more information about this error, try `rustc --explain E0658`.
error: could not compile `sycamore-web` (lib) due to 1 previous error

I realize this is likely due to some sort of version skew somewhere, but (a) we'd like a setup/environment where things are "frozen" such that this kind of skew won't (or at least shouldn't) happen, and (b) I don't really know how to debug this. Thoughts?

Feel free to update the PR in a way that will automagically get rid of this error, or if it's a matter of something like your having updated rust or cargo when I haven't, I'd love to hear thoughts on how we can make things at least let you know that you need to change/update versions of things, or better yet, give you the commands to do so, or even best, somehow run in the correct version of things (but not of course actually alter the ambient operating system global version of rust, say -- that would be going too far down that road; I was thinking along the lines of something written in Python that arranged and updated its own venv as necessary, so that it was only controlling the version of python used locally for its own purposes).

Author
Member

That's odd!

  • Which test is this happening in?
  • Do you get a similar "lint reasons are experimental" error when you call cargo build or trunk build?
  • What do you get when you call cargo --version and trunk --version?

Update: I predict that your Cargo version is earlier than 1.81, as discussed below.

Author
Member

I'm now confident that this is a Rust version issue. A helpful Redditor announced that the lint_reasons feature was stabilized in Rust 1.81.0. When I vary the CI container image, I get the same error as you in cimg/rust:1.80-node, but no error in cimg/rust:1.81-node.

Author
Member

> I'd love to hear thoughts on how we can make things at least let you know that you need to change/update versions of things

Since the error is coming from Sycamore, it seems like any attempt to enforce a new enough Rust compiler should also come from Sycamore, not from us. This might've been addressed in the official release of Sycamore 0.9.0, or the upgrade to 0.9.1; we're still on 0.9.0-beta.3.

If Sycamore 0.9.1 does check for Rust version compatibility in the way you'd like, we can open an upgrade PR after this one. (I think it's best to do it in that order, because this PR is the one that starts tracking Cargo.lock.) In fact, we should upgrade even if the official release doesn't address this particular issue.

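For concreteness, a hypothetical sketch of what that follow-up upgrade might look like (run from `app-proto`; the exact version and any feature flags would need to be confirmed at the time):

```sh
# Bump the Sycamore dependency; afterwards, double-check the entry in Cargo.toml
# (features etc.) and let the lockfile update on the next build.
cargo add sycamore@0.9.1
cargo update --package sycamore
```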
Owner

@Vectornaut wrote in #75 (comment):

> • Which test is this happening in?

Can't really tell; if I now execute cargo test, the entire output is

   Compiling sycamore-web v0.9.0-beta.3
error[E0658]: lint reasons are experimental
   --> /home/glen/.cargo/registry/src/index.crates.io-6f17d22bba15001f/sycamore-web-0.9.0-beta.3/src/lib.rs:181:21
    |
181 |     #[allow(unused, reason = "Unused in DOM mode.")]
    |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
    = note: see issue #54503 <https://github.com/rust-lang/rust/issues/54503> for more information

For more information about this error, try `rustc --explain E0658`.
error: could not compile `sycamore-web` (lib) due to 1 previous error
> • Do you get a similar "lint reasons are experimental" warning when you call cargo build or trunk build?

Indeed cargo build produces identical output.

> • What do you get when you call cargo --version and trunk --version?
> cargo --version
cargo 1.80.0 (376290515 2024-07-16)
> trunk --version
trunk 0.21.4

Your prediction is correct. So what's the response? The cargo I am executing is the ambient OS one, from /usr/bin/cargo. And:

> rpm -qf /usr/bin/cargo
rustup-1.28.1~0-1.1.x86_64
> sudo zypper search -s rustup
Loading repository data...
Reading installed packages...

S  | Name   | Type    | Version      | Arch   | Repository
---+--------+---------+--------------+--------+------------------------
i+ | rustup | package | 1.28.1~0-1.1 | x86_64 | openSUSE:Tumbleweed
i+ | rustup | package | 1.28.1~0-1.1 | x86_64 | openSUSE:Factory
i+ | rustup | package | 1.28.1~0-1.1 | x86_64 | openSUSE-Tumbleweed-Oss

so I have no OS package (viz. OpenSUSE Tumbleweed) access to a later version of cargo. But isn't cargo just some utility in the Rust ecosystem, like trunk is? Can't we just install the version of cargo we want in our repo? Can you make a commit to this PR that does so?

Thanks for letting me know your thoughts here.

Author
Member

> Your prediction is correct. So what's the response?

Since these messages crossed: see above.

Owner

@Vectornaut wrote in #75 (comment):

> If Sycamore 0.9.1 does check for Rust version compatibility in the way you'd like, we can open an upgrade PR after this one. (I think it's best to do it in that order, because this PR is the one that starts tracking Cargo.lock.) In fact, we should upgrade even if the official release doesn't address this particular issue.

OK, fine, but in the meantime I'm not going to merge this PR if it doesn't run for me. Can you check in a commit downgrading Sycamore so that this will work in my environment? Or conversely, check in a commit that makes this repo run in its own local version of rust? (I gather it's not enough to just update cargo; rust itself needs updating as well.) There is this kind of old rustenv thing but maybe there's a more official way to do that sort of thing? I don't really see why this repo should be tied to the ambient rust, as opposed to freezing a version of rust/cargo that it installs locally and then runs everything from.

For expediency at the moment, either solution (downgrading Sycamore/locally upgrading Rust) is fine by me. Sadly the one thing I can't do is just "Upgrade Rust on my machine." Turns out there's no OpenSUSE Tumbleweed rust package new enough yet, which is a bit surprising since one big reason I switched to Tumbleweed is that at the time (4-5 years ago, I guess), it always had at least as new a version of everything as any other distro. But maybe the Tumbleweed user community has dwindled to the point where that just isn't the case anymore...

Author
Member

> I have no OS package (viz. OpenSUSE Tumbleweed) access to a later version of cargo.
>
> rpm -qf /usr/bin/cargo
> rustup-1.28.1~0-1.1.x86_64

My impression is that rustup is a Rust version manager, sort of analogous to NVM. Your Rust version shouldn't be constrained by the rustup package version. (I'm on the same version of rustup, and I have a newer version of Rust.) If you haven't already, could you try calling rustup update?

Owner

Ah, OK, I did. And now cargo --version reports cargo 1.85.1 (d73d2caf9 2024-12-31) and cargo test succeeds. A couple of questions:

  • Since I executed rustup update as an ordinary user, I presume the update only affected me. Do you know where it's storing the new version of rust?
  • Should we file an issue to make our environment freeze the version of rust? Presuming there's a rustup command that sets the specific version of rust that cargo and rustc will use, one easy way to do that would be to use make as our command runner, and set up a makefile that ensures that the current version of rust is some specific version, every time make is run. There may of course be other rust-centric methods for this same thing. Freezing the rust version would avoid this kind of surprise in the future, and seems like good hygiene in general. I am not proposing we implement the freezing of rust in this PR, now that it's running for me, but just that we file an issue to be resolved at the next convenience.

I will continue with the review of this PR, which will likely involve pushing at least some trivial commit just to verify that the CI is working when I push as well.

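For what it's worth, a sketch of the relevant rustup commands (the per-user storage location, and one way to pin a toolchain for just this directory tree); the specific version below is only an example:

```sh
# Toolchains installed by rustup live under its per-user home directory.
rustup show home                       # usually ~/.rustup
ls "$(rustup show home)/toolchains"    # one subdirectory per installed toolchain
# One way to pin: install a specific toolchain and set a directory override.
rustup toolchain install 1.85.1
rustup override set 1.85.1
```

A `rust-toolchain.toml` file checked into the repository is the more shareable way to pin a toolchain, which could be part of the issue proposed above.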
glen reviewed 2025-03-29 00:29:28 +00:00
glen left a comment
Owner

This is close, just a few questions in review, mostly informational, not clear whether or not any additional changes will be needed.

@ -0,0 +8,4 @@
#
runs:
  using: "composite"
  steps:
Owner

Reviewing this is hampered because I have no idea what using: "composite" means, and https://forgejo.org/docs/next/user/actions/ sheds no light on this. Can you either explain or point to somewhere that has the information?

Author
Member

The documentation of Gitea Actions and Forgejo Actions seems to rely a lot on the principle that these are "similar and mostly compatible to GitHub Actions," despite Forgejo's insistence that "they are not and will never be identical." GitHub's description of composite actions is pretty much what I'd guess from the usage here:

> A composite action allows you to combine multiple workflow steps within one action. For example, you can use this feature to bundle together multiple run commands into an action, and then have a workflow that executes the bundled commands as a single step using that action. To see an example, check out Creating a composite action.

glen marked this conversation as resolved
@ -0,0 +18,4 @@
steps:
  - uses: https://code.forgejo.org/actions/checkout@v4
  - uses: ./.forgejo/setup-trunk
  - run: RUSTFLAGS='-D warnings' cargo test
Owner

Why isn't this path ../.forgejo/setup-trunk? That would seem to be the relative path from the working-directory to the action. I mean, I am not suggesting the file is incorrect; I am just saying the notation is unclear, and my particular concern is that when we (presumably sometime soon) remove the app-proto layer of the hierarchy, this workflow file should be reasonably resilient to that change.

Author
Member

I think the label defaults.run.working-directory sets the default only for run steps, like in GitHub Actions.

Owner

How is the working directory for a uses step determined then? Should that be made explicit in this file somehow?

@ -53,0 +54,4 @@
[[example]]
name = "irisawa-hexlet"
test = true
harness = false
Owner

OK, so if I understand correctly, this is the setting that makes cargo test also run e.g. the irisawa-hexlet example. Given that:

(A) Is the run-examples shell script now redundant, and should it be removed in this PR?

(B) Given that the parameters for all four examples are exactly the same, is there any way for this manifest file to just say that test should be true and harness false for everything in the examples directory? That seems likely to remain the case for the foreseeable future, and would have the advantage that we could add another example test by just creating a file in the examples directory.

Author
Member

(A) Right now, my main use case for run-examples is to get the printed output of the examples in a known order with no extra output mixed in. When we address #77, I plan to turn this into a script for updating the recorded output that we're testing against. Since cargo test doesn't really seem designed for this, I'd recommend keeping run-examples for now.

(B) When we address #77, we're probably going to bring the example test runs back into the test harness, where they'll be called in a different way. Thus, streamlining the current call methods may not be useful in the long term.

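To make (A) concrete, here is a placeholder sketch of the kind of thing `run-examples` could evolve into for #77; the example name is the only one mentioned in this thread, and the file paths are hypothetical:

```sh
# Capture example output in a fixed order, then compare it against a recorded baseline.
# Placeholder sketch only; not the actual run-examples script.
cargo run --example irisawa-hexlet > examples-output.txt
diff examples-output.txt expected/examples-output.txt
```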
glen marked this conversation as resolved
Owner

P.S. If there is some small change inspired by any of the comments, see if you can give me the information needed for me to do it myself, vis-a-vis testing that my pushes trigger the CI run. Otherwise, I'll have to make something up ;-)

Author
Member

> P.S. If there is some small change inspired by any of the comments, see if you can give me the information needed for me to do it myself, vis-a-vis testing that my pushes trigger the CI run. Otherwise, I'll have to make something up ;-)

One thing you could try is:

  1. Make a commit that causes the branch to fail CI. For example, you could:
    • Call compile_error!("none shall pass"); somewhere in main.rs.
    • Call panic!("at the disco"); inside a test function.
  2. Check whether you're allowed to merge.
  3. If necessary, we figure out how to only allow branches that pass CI to be merged.
  4. Revert the commit that causes the branch to fail CI.
Author
Member

By the way, it turns out that Sycamore doesn't currently enforce a minimum Rust version at compile time, but the maintainer has okayed us to make a pull request! I'll try to fit that in next week.

glen reviewed 2025-03-29 15:05:17 +00:00
@ -0,0 +11,4 @@
steps:
  - run: rustup target add wasm32-unknown-unknown
  - run: curl --output - --proto '=https' --tlsv1.2 --retry 10 --retry-connrefused --location --silent --show-error --fail 'https://github.com/trunk-rs/trunk/releases/download/v0.21.12/trunk-x86_64-unknown-linux-gnu.tar.gz' | tar --gunzip --extract --file -
    working-directory: /home/circleci/.cargo/bin
Owner

I don't love that we are apparently using some layout detail of the docker image we are currently using (that it has a /home/circleci directory that is somehow relevant). Can we put this stuff in a controlled location, like in our tree, and then invoke cargo and/or trunk later in a way that explicitly points to where we put it? I would prefer that if possible. Again, happy for pointers on how that might be done and then I could try to do it.

Author
Member

Good point: it would be nice to make the setup-trunk action more container-independent. I think I've done this in commit df2d458. There may be a cleaner way to do it, but this is the best I could come up with after about an hour of work.

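For readers following along, one container-independent way to do that kind of install (not necessarily what commit df2d458 does) is to derive the destination from the environment instead of hard-coding the image's home directory:

```sh
# Install Trunk into Cargo's bin directory wherever this container's home is,
# rather than assuming /home/circleci. Sketch under that assumption only.
DEST="$HOME/.cargo/bin"
mkdir -p "$DEST"
curl --output - --proto '=https' --tlsv1.2 --location --silent --show-error --fail \
  'https://github.com/trunk-rs/trunk/releases/download/v0.21.12/trunk-x86_64-unknown-linux-gnu.tar.gz' \
  | tar --gunzip --extract --file - --directory "$DEST"
```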
Owner

Interesting. Does cargo not let you specify a place for it to put its stuff? That would be preferable if so.

Owner

In fact, there seems to be a CARGO_HOME environment variable. I'd rather set that to someplace in our directory tree to keep this self-contained. If you concur, let me make the change so I can make sure the CI running is working for me. But I won't make any changes until we reach consensus.

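As a sketch of the CARGO_HOME idea (the directory name here is hypothetical), the setup step could look something like this:

```sh
# Keep Cargo's state, including installed binaries, under the repository checkout
# instead of the image's home directory.
export CARGO_HOME="$PWD/.cargo-home"
mkdir -p "$CARGO_HOME/bin"
# ...unpack Trunk into "$CARGO_HOME/bin" as before, then make it visible:
export PATH="$CARGO_HOME/bin:$PATH"
```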
Owner

Resolved all but one comment, added one new one.

Vectornaut added 1 commit 2025-03-30 20:05:35 +00:00
Find the home directory instead of hard-coding it
All checks were successful
/ test (pull_request) Successful in 2m19s
df2d45816c
This should make the `setup-trunk` workflow more container-independent.