From a quick look at the repo, it is an end-to-end testing tool for web applications.
Also, it seems that their big selling point is a verbose, English-like syntax.
It is mutually assured destruction. The job seeker AI spams out a resume to every listing and the hiring AI rejects all applicants for not meeting some unknown criteria. In the end, no worker can find a job and no employer can get applicants. Companies go back to only hiring friends and family of existing employees.
Why would you use a library or framework when you can code everything from scratch? It probably depends on how good the VSCode extension is vs how bad the IDE is.
For the languages I have tried (mostly Golang, plus a bit of Terraform/Terragrunt), VSCode plugins can do code highlighting, flag syntax and lint errors, jump to a method’s implementation, and find a method’s callers, though the auto-complete seems to pick random words from the code base. It is good enough for everyday use.
IDEs I have used (Eclipse for Java, PyCharm, IntelliJ for Kotlin) offer more. They all have starter templates for common file types. The auto-complete is much more syntax-aware and can sometimes guess which variables I intend to pass in as arguments. There is refactoring that can correctly find other usages of a variable and make trivial code rewrites. There are generators for boilerplate methods. They all have a built-in graphical debugger and a test runner.
TAOCP has a reputation for being very deep and thorough, not for being a good introductory text. One of my professors said that in his (very long) industrial career, he met only one person who had actually read the books from beginning to end, but everyone had looked something up in them once or twice.
That has been my experience. I once needed to solve a very specific problem (I think it was calculating statistical values on an infinite stream). I found the single copy of TAOCP in the office reference library, read the relevant section, and implemented the suggested algorithm.
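If memory serves, it was something like Welford’s method for keeping a running mean and variance, which updates both in constant memory as each value arrives. A minimal Go sketch (my reconstruction, not a quote from the book):

```go
package main

import (
	"fmt"
	"math"
)

// RunningStats tracks the mean and variance of a stream in O(1) memory
// using Welford's update.
type RunningStats struct {
	n    int64
	mean float64
	m2   float64 // running sum of squared deviations from the mean
}

// Add folds one more observation into the running statistics.
func (s *RunningStats) Add(x float64) {
	s.n++
	delta := x - s.mean
	s.mean += delta / float64(s.n)
	s.m2 += delta * (x - s.mean)
}

func (s *RunningStats) Mean() float64 { return s.mean }

// StdDev returns the sample standard deviation seen so far.
func (s *RunningStats) StdDev() float64 {
	if s.n < 2 {
		return 0
	}
	return math.Sqrt(s.m2 / float64(s.n-1))
}

func main() {
	var s RunningStats
	for _, x := range []float64{2, 4, 4, 4, 5, 5, 7, 9} {
		s.Add(x)
	}
	fmt.Printf("mean=%.2f stddev=%.2f\n", s.Mean(), s.StdDev())
}
```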
Maybe it is just my experience, but in the last decade, employers stopped trying to recruit and retain top developers.
I have been a full-time software engineer for more than a decade. In the 2010s, the mindset at tech giants seemed to be that they had to hire the best developers and do everything they could to keep them. The easiest way to do both was to be the best employer around. For example, Google had 20% time, many companies offered paid sabbaticals after so many years, and every office had catering once a week (if not a free cafeteria). That way, employees would be telling all of their friends how great it is to work for you, and if they decided to look for other work, they would have to give up their cushy benefits.
Then, a few years before the pandemic, my employer switched to a different health insurance company and got the expected wave of complaints (the price of this drug went up, my doctor is not covered). HR responded with “our benefits package is above industry averages”. That is a refrain I have been hearing ever since, even after switching employers. The company is not trying to be the best employer, the one everyone wants to work for; they just want to be above average. They are saying “go ahead and look for another employer, but they are probably going to be just as bad”.
Obviously, this is just my view, so it is very possible that I have just been unlucky with my employers.
C does exactly what you tell it, no more. Why waste cycles setting a variable to a zero state when a correct program will set it to whatever initial state it expects? It is not user-friendly, but it is performant.
Are we really doing fine? A 4% Linux market share? Windows as the default?
I suspect that the issue hindering adoption is GNU and other userland projects, not the Linux kernel. Plenty of people use devices that pair a Linux kernel with an easy-to-use UI and popular software (see Android and Chromebooks).
Many people would happily switch to a Linux-based OS that had the exact same GUI as their current OS and ran the exact same software. That is not a realistic requirement in practice.
It is possible that Linux would see more adoption if more money were invested in drivers for a wider range of hardware, but having Linux kernel developers write drivers instead of hardware vendors is not a strategy that scales well.
Senior developer tip: squash the evidence.
In the early days of the Internet, there was a cottage industry burning Linux ISOs to CDs and selling them.
I work in Java, Golang, and Python, with Helm, CircleCI, bash scripts, Makefiles, Terraform, and Terragrunt for testing and deployment. There are other teams handling the C++ and SQL (plus whatever dark magic QA uses).
I am well aware of learning, but people tend to learn by comprehension and understanding. Completing phrases without understanding the language (or the concept of language) is the realm of LLMs and Scrabble players.
About 10 years ago, I read a paper that suggested mitigating a rubber-hose attack by priming your sysadmins with subconscious biases. I think this may have been it: https://www.usenix.org/system/files/conference/usenixsecurity12/sec12-final25.pdf
Essentially, you turn your user into an LLM for a nonsense language. You train them by having them read nonsense text. You then test them by giving them a sequence of text to complete, recording how quickly and accurately they respond. Repeat until the accuracy is at an acceptable level.
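As a toy illustration (the names, thresholds, and structure below are mine; the paper’s actual protocol is a serial-interception game and more involved), authentication boils down to checking that the user performs measurably better on sequences they were trained on than on sequences they have never seen:

```go
package auth

import "time"

// presentTrial shows one sequence to the user and records whether they
// completed it correctly and how long they took. The UI is stubbed out.
func presentTrial(seq string) (correct bool, latency time.Duration) {
	// A real implementation would render the prompt and time the keystrokes.
	return true, 0
}

// score runs a batch of trials and returns accuracy and mean latency.
func score(seqs []string) (acc float64, avg time.Duration) {
	var hits int
	var total time.Duration
	for _, s := range seqs {
		ok, lat := presentTrial(s)
		if ok {
			hits++
		}
		total += lat
	}
	return float64(hits) / float64(len(seqs)), total / time.Duration(len(seqs))
}

// authenticate passes only if the user is measurably faster and more
// accurate on trained sequences than on fresh control sequences. The
// knowledge is implicit: the user cannot recite the sequences, but the
// performance gap gives them away.
func authenticate(trainedSeqs, controlSeqs []string) bool {
	trainedAcc, trainedLat := score(trainedSeqs)
	controlAcc, controlLat := score(controlSeqs)
	return trainedAcc-controlAcc > 0.10 && controlLat-trainedLat > 50*time.Millisecond
}
```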
Even if an attacker kidnaps the user and sends in a body double with your user’s id, security key, and means of biometric identification, they will still not succeed. Your user cannot teach their doppelganger the pattern, and if the attacker puts the real user on a video call, reading the prompt to them and relaying their dictated response should introduce a detectable amount of lag.
The only remaining avenue for the attacker is to dump the body of the original user, kidnap the family of another user, and force that user to carry out the attack. The paper does not bother to cover this scenario, since the mitigation is obvious: your user conditioning should include a second module teaching users to value the security of your corporate assets above the lives of their loved ones.
Why should we keep leap seconds? Let noon drift by a minute per century (or whatever). The roughly 27 leap seconds added since 1972 work out to about that rate anyway.
It looks like it targets JavaScript, the language that needs it least. What is the job-security advantage of this tool over a minifier?
I am not a hiring manager (or, more likely to be the one reading your resume, a recruiter/HR person), so I cannot speak to the value of having an MS listed on one’s resume.
I am a senior developer with a master’s degree and I am very grateful for the knowledge I got from that degree. Since I graduated, I have never needed to write a compiler, but I know how to implement a bunch of language features, and that makes new languages easier to learn.
Could I have learned all of that without going to school? Definitely. It is all in white papers, software documentation, and textbooks, but for me, that is not the best way to learn. From what I have been able to find, even the most advanced MOOCs stop at an advanced undergraduate level and do not cover grad-school concepts.
I always feel a little paranoid when I explicitly close transactions, connections, and files (for quick-running scripts, the OS will close the file when my process exits, and for long-running applications, the garbage collector will close it once the object becomes unreachable). Then I read a blog post like this and remember that it is always better to explicitly free resources when I am done with them.
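In Go, for example (my sketch, not the post’s), the explicit version is one deferred call, and it also surfaces errors that a finalizer would silently swallow:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func writeReport(path string) (err error) {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	// Close explicitly instead of waiting for the garbage collector's
	// finalizer; Close can fail (e.g. on a full disk), and a finalizer
	// would silently drop that error.
	defer func() {
		if cerr := f.Close(); cerr != nil && err == nil {
			err = cerr
		}
	}()

	_, err = fmt.Fprintln(f, "report contents")
	return err
}

func main() {
	if err := writeReport("report.txt"); err != nil {
		log.Fatal(err)
	}
}
```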
- Encrypt the data at rest
- Encrypt the data in transit
Did you remember to plan for a zero downtime encryption key rotation?
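One answer I have seen work (a sketch of a general technique, not anything from the parent comment) is envelope encryption: each record is encrypted with its own data key, and only the data keys are wrapped with the master key, so rotating the master key means re-wrapping kilobytes of keys instead of re-encrypting terabytes of data:

```go
package envelope

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

// gcmSeal encrypts plaintext under key (16/24/32 bytes for AES) with a
// fresh random nonce.
func gcmSeal(key, plaintext []byte) (nonce, ciphertext []byte, err error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return nonce, gcm.Seal(nil, nonce, plaintext, nil), nil
}

// gcmOpen reverses gcmSeal.
func gcmOpen(key, nonce, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return gcm.Open(nil, nonce, ciphertext, nil)
}

// envelope holds a record encrypted with a per-record data key, plus
// that data key encrypted ("wrapped") under the master key.
type envelope struct {
	keyNonce, wrappedKey  []byte
	dataNonce, ciphertext []byte
}

func seal(masterKey, record []byte) (*envelope, error) {
	dataKey := make([]byte, 32)
	if _, err := rand.Read(dataKey); err != nil {
		return nil, err
	}
	kn, wk, err := gcmSeal(masterKey, dataKey)
	if err != nil {
		return nil, err
	}
	dn, ct, err := gcmSeal(dataKey, record)
	if err != nil {
		return nil, err
	}
	return &envelope{kn, wk, dn, ct}, nil
}

// rotate re-wraps the small data key under a new master key; the bulk
// ciphertext is never touched, so rotation needs no downtime window
// proportional to the data size.
func rotate(oldMaster, newMaster []byte, e *envelope) error {
	dataKey, err := gcmOpen(oldMaster, e.keyNonce, e.wrappedKey)
	if err != nil {
		return err
	}
	e.keyNonce, e.wrappedKey, err = gcmSeal(newMaster, dataKey)
	return err
}
```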
- No shared accounts at any level of access
Do you know when account passwords expire? Have you thought about password rotation?
- Full logging of access and activity.
That sounds like a good practice until you have 20 (or even 2000) backend server requests per end-user operation.
All of those are taken from my experience.
Security is like an invasive medical procedure: it is very painful in the short term but prevents dire complications in the long term.
Not at all in my org, as far as I know. We are a team of senior engineers somewhat set in our ways, and I am not sure how good the Copilot plugin for Emacs is.
We are part of a large company and we had a mandate from up top to come up with ways to incorporate AI into our product. We prototyped a few, but could never get them beyond “almost good enough to be useful”. Other teams have presented promising prototypes of in-house AI assistants that we can incorporate into our products.
My team pivoted to the inverse: seeing if we can make our product more useful to ML developers.
We have all of our build and CI in `make`, so in theory all the CI system needs to do is run a single command. Then I try to run the command on a CI server and it is missing an OS package (and the version in their package manager is a major version behind, so I need to download a pre-built binary from the project site). Then the tests get killed for using too much memory. Then, after I reduce the tests’ resource usage, they time out… I am grateful that we use CircleCI as our SaaS CI/CD, since they let me SSH into a test container and see what is going on.