• 3 Posts
  • 96 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Oh, I could easily be wrong about forgejo having integrated ci/cd already. It’s the only tool I mentioned above that I’ve never used before. I’m not a good source on this one.

    But I have used both flux and argo quite a lot. I’ll admit that our flux implementation was bad, but it was a bad experience for everyone using it with me. It was a memory hog and often crashed. Very few people understood how to use it correctly. When there were errors with e.g. a helm template, you just had to go digging through the logs for issues. It moved git tags around, so you don’t get a history of what flux was doing. I could probably remember more issues if I tried.

    But none of that was a problem with Argo. We just started using it successfully on day 1. Plus its UI is fantastic and a huge advantage. It’s easy to navigate, spot issues, troubleshoot, etc. It also exposes users to resources they unknowingly create because Argo displays owned resources. This part really helped people understand what was going on in k8s. Oh and argo is very extensible. Maybe flux is too but I haven’t tried.


  • They’re both good and quite similar on the surface. But I find that larger, more complicated use cases tend to get messy with gitlab because of the heavy use of bash. GitHub actions, however, are (always?) written in typescript. If your automation needs a lot of logic to handle varying use cases, then it’s nice to avoid bash and code in a more expressive language.

    In other words, I’ve seen a few of the monstrosities that large companies have built in gitlab, and yikes!
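    For flavor, here’s a minimal sketch of what a custom action looks like in typescript with the official @actions/core toolkit. The “environment” input and the replica logic are invented for illustration:

    ```typescript
    import * as core from "@actions/core";

    async function run(): Promise<void> {
      try {
        // Typed inputs instead of positional bash arguments.
        const environment = core.getInput("environment", { required: true });
        const allowed = ["dev", "staging", "prod"];

        if (!allowed.includes(environment)) {
          throw new Error(`unknown environment: ${environment}`);
        }

        // Real control flow instead of nested case statements and string tests.
        const replicas = environment === "prod" ? 5 : 1;
        core.setOutput("replicas", replicas.toString());
      } catch (err) {
        core.setFailed(err instanceof Error ? err.message : String(err));
      }
    }

    run();
    ```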


  • That basic idea is roughly how compression works in general. Think zip, gzip, etc. Identify snippets of frequently used byte sequences and create a “map” of where each sequence is used. These methods work great on simple types of data like text files, where there’s a lot of repetition. Photos have a lot more randomness and tend not to compress as well. At least not so simply.
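    A quick sketch of that difference using Node’s built-in zlib, with random bytes standing in for photo-like data:

    ```typescript
    import { deflateSync } from "node:zlib";
    import { randomBytes } from "node:crypto";

    // Repetitive "text" data: the same byte sequences appear over and over.
    const text = Buffer.from("the quick brown fox ".repeat(5000));

    // Random bytes: a rough stand-in for already-noisy photo data.
    const noise = randomBytes(text.length);

    const ratio = (buf: Buffer): string =>
      ((deflateSync(buf).length / buf.length) * 100).toFixed(1) + "%";

    console.log(`text:  ${ratio(text)} of original size`);  // ~0.5%
    console.log(`noise: ${ratio(noise)} of original size`); // ~100%
    ```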

    You could apply the same methods to multiple image files but I think you’ll run into the same challenge. They won’t compress very well. So you’d have to come up with a more nuanced strategy. It’s a fascinating idea that’s worth exploring. But you’re definitely in the realm of advanced algorithms, file formats, and storage devices.

    That’s apparently my long response for “the other responses are right”


  • Never do this.

    Git is all about tracking changes over time, which is meaningless with binary files. They bloat your repo, slowing down operations. Depending on the repo, they are likely to change from CI with every commit. That last one means every commit turns into 2 commits, btw. They can ruin diffs. I could go on for a long time here.

    There are basically 0 upsides. Use an artifact repository instead!


  • A complicated plugin ecosystem (e.g. Jenkins) makes for a terrible user experience. It’s annoying to maintain a bunch of config files. Managing dependencies can be a complete nightmare. These problems also complicate your ci/cd.

    So I’ll offer a slightly different answer. I prefer a single file instead of splitting up the config. And I’ll use OpenTelemetry as an excellent example of why: the plugins are compiled right into the app binary. This offers a ton of advantages, including a great reason to merge all of your app config into a single file.

    This really only works well if you have a good app though.
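    The collector itself is Go, but the same idea carries over to the language SDKs. Here’s a minimal sketch with the OpenTelemetry Node SDK (the service name and endpoint are placeholders): the “plugins” are ordinary imports resolved at build time, and everything is wired up in one file.

    ```typescript
    // tracing.ts — all telemetry wiring for the app lives in this one file.
    import { NodeSDK } from "@opentelemetry/sdk-node";
    import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
    import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

    const sdk = new NodeSDK({
      serviceName: "my-app", // placeholder service name
      traceExporter: new OTLPTraceExporter({
        url: "http://localhost:4318/v1/traces", // placeholder OTLP endpoint
      }),
      // Plugins are compile-time dependencies, not config-file discoveries.
      instrumentations: [getNodeAutoInstrumentations()],
    });

    sdk.start();
    ```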


  • Open source software literally means that the source code is available to anyone. On GitHub, that just means your repo is public rather than private. But the method technically doesn’t matter. You could publish to a forum if you wish. That’s still open source!

    Free OSS just means that anyone is free to use and modify the source code for any purpose. The details are usually defined in a LICENSE file.

    I feel like you’re really asking about the common practices and methods used in FOSS. Right? If so, that’s entirely up to you as the maintainer. As the project matures, you may attract other contributors, which in turn will motivate changes to your tools and methods.

    Start with what works for you. Model after similar projects if you wish. Adjust as change is needed.


  • Test coverage is useful to measure simply because it’s a metric. You can set standards. You can codify the number into ci/cd. You can observe whether the number goes up or down over time. You can argue about whether these things are valuable, but quantifying test coverage makes it simpler (or even possible) to discuss testing. As people discuss test coverage and building tests becomes normalized, the topic becomes boring. You’ll only get thoughtful discussions on automated testing when somebody establishes a new method, pattern, etc. After that, most tests are very simple. That’s often the point.

    Even “testing on autopilot” has high value.
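    As a concrete sketch of codifying the number, Jest (just one tool among many) can fail ci/cd when coverage drops below an agreed bar; the 80% here is an arbitrary example:

    ```typescript
    // jest.config.ts — the build fails if coverage falls below the standard.
    import type { Config } from "jest";

    const config: Config = {
      collectCoverage: true,
      coverageThreshold: {
        // 80% is illustrative; set a standard and watch the trend over time.
        global: { lines: 80, branches: 80, functions: 80, statements: 80 },
      },
    };

    export default config;
    ```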

    You can build lots of useful front end tests. There are tools for it. But it’s just not possible to test everything because you can’t codify every requirement. E.g. ensure that this ui element is 5 pixels below some other element, except when the window shrinks, and …

    I haven’t seen great front end tests. But the ones I’ve seen mostly focus on functionality and flow rather than aiming to cover all possible scenarios. Unit tests are different in this regard.
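    A minimal sketch of that functionality-and-flow style using Playwright; the URL and element labels are made up:

    ```typescript
    import { test, expect } from "@playwright/test";

    // Flow-focused: exercise the happy path, not pixel-perfect layout.
    test("user can sign in", async ({ page }) => {
      await page.goto("https://example.com/login"); // hypothetical app
      await page.getByLabel("Email").fill("user@example.com");
      await page.getByLabel("Password").fill("hunter2");
      await page.getByRole("button", { name: "Sign in" }).click();
      await expect(page.getByText("Welcome back")).toBeVisible();
    });
    ```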

    > Integration testing makes sense but I find it hard to do in the time I have.

    This is a red flag. Building tests should be a planned part of your work, usually described as acceptance criteria. If you need 4 hours to write a code change, then plan for 8 or whatever so you can build tests. Engineering leaders should encourage this. If they don’t, I would consider that a cultural problem. One that indicates a lack of focus on quality and all of the problems that follow.

    Edit: I want to soften my “red flag” comment. That’s a red flag for me. That job isn’t necessarily bad. But I would personally not be interested. It’s ok to accept things like, “we don’t write tests and sometimes we deal with issues”. Especially if it’s a good job otherwise.


  • Here’s my random collection of thoughts on the subject.

    I have no idea how common it is in general. Seems like some devs build tests while others don’t. This varies plenty at the team level as well as organization-wide. I’ve observed this at small to very large companies, though not FAANG, where I generally hope and expect that tests are a stronger standard.

    I will say that tests are consistently and heavily used in every large, open source project that I’ve reviewed. At some point, I think quality test cases become a requirement.

    Here’s the big thing. Building automated tests is almost always a wise investment, regardless of the size of the org. Manual testing is dramatically more expensive and less effective than running unit and integration tests. I’ve never written unit tests and not found issues.

    More importantly, writing unit tests forces you to write code that can be tested. This is important. IMO, code that can be tested is 1) structured differently and 2) almost always better.

    Unit tests protect you from your own mistakes. Frequently. Integration tests protect you from other people, e.g. when your code depends on an api and that api unexpectedly introduces a breaking change.
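    A tiny sketch of both ideas using Node’s built-in test runner (the discount rule is invented for illustration). The logic is extracted into a pure function precisely so a unit test can catch mistakes like the boundary case:

    ```typescript
    // pricing.test.ts — run with: node --test (via tsx for TypeScript)
    import { test } from "node:test";
    import assert from "node:assert/strict";

    // Pure function, separated from I/O and framework code so it can be tested.
    function totalWithDiscount(subtotal: number): number {
      // Invented rule: 10% off orders over 100.
      return subtotal > 100 ? subtotal * 0.9 : subtotal;
    }

    test("applies 10% discount above 100", () => {
      assert.equal(totalWithDiscount(200), 180);
    });

    test("no discount at or below 100", () => {
      // The boundary itself is the classic own-mistake a unit test catches.
      assert.equal(totalWithDiscount(100), 100);
    });
    ```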

    Everybody likes having quality tests. Nobody likes writing tests.

    Quality tests are basically a strict requirement for fully automating ci/cd to production. Sure, you can skip tests and automate prod deploys anyway, but I certainly don’t recommend it. I would expect people to get fired for doing this.

    Chasing 100% test coverage is a fool’s game. Think about your code, what matters, and what doesn’t. Test the parts that add value and skip the rest. This is highly related to how writing unit tests changes your code.

    Building front end tests is inherently hard. It’s practically impossible to fully test front end code. Not even close.

    Personally, I like the idea of skipping tests when you’re building a POC. Before the POC is done, you may not know if your solution is viable or what needs to be tested. The POC helps you understand. Build tests for the MVP and further iterations.

    Quality ci/cd tests are complemented by quality observability, which is a large and independent topic.

    / ramblings of a tired mind