IIRC, GitHub.com and GitHub Enterprise support using SSH for signing. I think that whatever is used should leverage asymmetric/public-key cryptography.
Passkeys maybe?
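If SSH signing is the route, the Git-side setup is pretty small. A minimal sketch, assuming Git 2.34+ and an existing ed25519 keypair (the public key also has to be uploaded to GitHub as a signing key, which is separate from an authentication key):

```sh
# Sign commits and tags with an SSH key instead of GPG (requires Git 2.34+).
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true
```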
Based on the way you wrote your questions, I sense that your situation is completely different from mine. But we work hard to eliminate silos, eliminate throwing work over the fence, and partner with experts to ensure that what we ship is high quality, so that we don’t get paged in the middle of the night. The better we do our jobs during the day, the better we sleep at night.
I think that instead of “forcing tests,” you should focus on “proving quality.” You think it works the way you intended? Cool. How do you know? What if someone fed it 128 NUL bytes? Would it still do the right thing? How do you know?
Ensuring quality is a larger concept than simply writing tests, but writing tests is definitely part of it. I think if you aim higher and teach the provability of quality, then the better engineers will self-select by starting to write tests.
“If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.” — Antoine de Saint-Exupéry
Additionally, if you’re one person against the world, you’re going to have a tough time. Build alliances. Partner with people who will reinforce the message. If you are the only one telling them something they don’t like, they will shun you for it. But if you partner with allies who all have the same message, people are more likely to start to listen. It starts to become a community.
And if all else fails, prove the value of tests by going first. You can’t force anyone to do anything. But you can start doing this yourself. At some point, if code gets called into question, you can look at the tests together to see what’s covered and how that thing is supposed to work. It’s all part of letting the robots do what the robots are good at, which frees you up to do the things that you’re good at.
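To make the “going first” part concrete: a small table-driven test in Go is usually enough to answer the “how do you know?” question. Sanitize here is a hypothetical stand-in for whatever code is being called into question, and the 128-NUL-bytes case from above is just one more row in the table:

```go
package sanitize

import (
	"bytes"
	"strings"
	"testing"
)

// Sanitize is a hypothetical stand-in for the code under discussion:
// it strips NUL bytes and trims surrounding whitespace.
func Sanitize(s string) string {
	s = strings.ReplaceAll(s, "\x00", "")
	return strings.TrimSpace(s)
}

func TestSanitize(t *testing.T) {
	tests := []struct {
		name  string
		input string
		want  string
	}{
		{"plain input passes through", "hello", "hello"},
		{"surrounding whitespace is trimmed", "  hello  ", "hello"},
		{"128 NUL bytes collapse to nothing", string(bytes.Repeat([]byte{0}, 128)), ""},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := Sanitize(tt.input); got != tt.want {
				t.Errorf("Sanitize(%q) = %q, want %q", tt.input, got, tt.want)
			}
		})
	}
}
```

When someone asks whether it handles some weird input, the answer stops being a shrug and becomes “there’s a row in the table for that, and it runs on every push.”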
I think it depends. If you have to refactor in order to test, you probably built it poorly the first time around.
Right. “Move fast” means that it’s going to get progressively worse, and 2 years from now it will all collapse under the weight of its bugs.
Think of tech debt as cancer, and tests as chemotherapy. It might suck for a while, but it can also make you much better.
Sounds like a bunch of junior engineers with senior job titles.
“Senior” is the new mid-level.
Whatever. Comments are helpful, which makes pure JSON a poor choice. JSON5 or JSONC are better, but linting and static analysis matter for every form of code, so make sure your tooling understands whichever syntax you choose.
My current preference is generally TOML, but I’ve started dabbling with custom HCL2 DSLs. (I write a lot of Go and Terraform.)
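A tiny, made-up illustration of why comments matter; in TOML the “why” can live right next to the “what,” which plain JSON simply can’t carry:

```toml
# Hypothetical service config; the comments are the point.
[retry]
max_attempts = 5     # agreed with the upstream team, which throttles beyond this
backoff_ms   = 250   # anything lower just burns quota without helping
```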
I build software that is used by nearly all engineers in our company. We own hundreds of web applications and websites. We’ve grown by acquiring smaller companies, and we have an extremely heterogeneous environment.
25 years ago, I started my career as a web designer. Today, I’m a Principal Cloud and Platform Engineer. To this day, I regularly apply lessons from the world of UX when building tech.
“Design is not just what it looks like and feels like. Design is how it works.” — Steve Jobs
Naming consistency helps to reduce the mental friction that people have when learning how something works. For example, one of my projects is a suite of Terraform modules designed as building blocks that cover all of the fundamental pieces of any app’s stack. We have designed these 20-ish modules to work well standalone, as well as when used together. Certain patterns are the same across the board.
(1) We strongly favor dependency injection and limit the use of ternary statements. In Terraform terms, that means configuration comes in through input variables or a .tfvars file. Everyone knows that this is how it works, so it reduces the mental friction when adopting a new/additional module.
(2) Variables that do the same thing are named identically across all modules, and their descriptions are identical. For example, tags = [k:v] works exactly the same way in every module, and people don’t have to think about it.
(3) Modules follow a naming pattern. Among other things, they begin with the name of the service that the module talks to. (If we find that a module talks to multiple services, we need to break it into smaller chunks.) So aws-, newrelic-, datadog-, github-, and pagerduty- are all examples; there’s a small sketch of what this looks like just after this list.
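Here’s the sketch. It’s heavily simplified, and the module sources, names, and values are illustrative rather than our actual code, but it shows the pattern: identical, injected inputs across modules, and module names prefixed with the service they talk to:

```hcl
# Shared, dependency-injected input; the name and description are identical
# in every module, so nobody has to re-learn it.
variable "tags" {
  description = "Tags applied to every resource this stack creates."
  type        = map(string)
}

# Module names start with the service they talk to: aws-*, datadog-*, etc.
module "aws_alb" {
  source = "git::https://example.com/terraform-modules/aws-alb.git" # illustrative source
  name   = "my-app"
  tags   = var.tags
}

module "datadog_monitors" {
  source = "git::https://example.com/terraform-modules/datadog-monitors.git" # illustrative source
  name   = "my-app"
  tags   = var.tags
}
```

The actual values arrive through a .tfvars file, which is the dependency-injection part: the modules never guess at configuration, they’re handed it.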
This overall “design” has not only reduced mental friction and made the modules easier to understand and use, it has also made them easier to manage across hundreds of repositories supporting hundreds of apps. Collaboration, cooperation, and communication have all improved as a result. And if something is difficult to understand, it means that we screwed up: we need to do a better job of listening to the app-engineering teams and the SREs who support them so we can streamline and clarify as much as possible.
“Customers” come in all sorts of shapes and forms.
I have mixed feelings. I spent 4 years working at AWS, was an original member of the SDK/CLI team, and I work across the spectrum of AWS services on a daily basis. Is it worth the effort to get a (virtual) piece of paper with my name on it? Maybe, maybe not.
If you lack the experience/reputation and are trying to build it, though, I think it could be useful. But it only matters if you’re actually doing the things that you get certified on, because learning for real is more important than a (virtual) piece of paper with your name on it.