i’d like this summary the most:
Musk now says it’s ‘pointless’ to build a Tesla
😁
but maybe only for emails from outside, not for emails from within protonmail? haven’t read any specs of protonmail yet…
well, for e2ee you obviously have to let one end encrypt the data for the other end (good luck with newsletters then). for the usual services, kindly asking them to support either s/mime or gpg for outgoing emails would at least make the wish known, but good luck there too.
i think the already-mentioned solution of encrypting incoming messages on your side, just before the mda delivers them to your inbox, should be the closest possible to what op wants. one would need to check whether a message is already encrypted and skip encryption for those.
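a rough sketch of that idea, assuming procmail as the delivery filter and you@example.org as a placeholder key (plain-text bodies survive this fine; MIME mail gets cruder treatment):

# ~/.procmailrc — encrypt the body of incoming mail before delivery (sketch)
# the recipient's public key must be in the keyring of the user running procmail
:0 fbw
* !^Content-Type:.*multipart/encrypted
* B ?? !-----BEGIN PGP MESSAGE-----
| gpg --batch --armor --trust-model always --encrypt --recipient you@example.org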
if you only want the admin of that email (imap) server to be unable to read all emails, placing a separate encrypting server (smtp + encrypt + forward) between the outside world and your imap server could be a solution.
one should have a look into the logfiles too, as some mailers might log message subjects, and of course sender/recipients along with ip addresses of incoming/outgoing servers, which the op might not want to be readable either (i don’t know protonmail that well).
also, gpg allows for sign-then-encrypt, hiding the signature within the encrypted data, which could be wanted (IMHO). one might also want to look at exactly which parts of a message’s contents and headers are encrypted or plaintext on the server before feeling safe from the threat one wants to be protected from.
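for completeness, that sign-then-encrypt order is what gpg does when you combine both flags; the recipient address here is a placeholder:

# sign first, then encrypt — the signature ends up inside the ciphertext
gpg --sign --encrypt --armor --recipient them@example.org message.txt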
but then the admin can still read the mail while it arrives ;-)
you’re welcome.
what i’d suggest… a general rule i like to always follow is to use a test system for everything new. but that doesn’t need to be a full separate system every time.
let’s say you have your mailbox and want to try fetching new mails from it using fetchmail. first, you can use the uidl mechanism to only fetch every mail once and otherwise leave them all on the server. but i like it a bit more secure: create a second email address/account at your mail provider’s service only for testing. that way you can test the mechanisms however you like without even touching your real inbox (maybe even fill it up with large emails and watch how the system reacts; i once had an email account with a cheap provider that deadlocked inboxes when full…). then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you’re done.

every change (not only fetchmail things) could be tested this way before going live. filtering could be done with procmail, for example, but if the mda called by procmail somehow exits with success while the email really isn’t delivered, the email might get lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
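a minimal sketch of such a throwaway test config; host, account and mda are purely hypothetical:

# ~/.fetchmailrc-test (chmod 600) — test account only, real inbox untouched
poll mail.example.org protocol pop3 uidl
    user "testbox@example.org" password "secret"
    keep    # leave all mail on the server
    mda "/usr/bin/procmail -d %T"

# run it against the test config only:
fetchmail -f ~/.fetchmailrc-test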
have fun!
It’s possible to tell your mta (like postfix) to use another mta for all mail, or only for some domains etc., so using a third party as the internet-facing service, then fetching the mails with fetchmail and storing them in a dovecot server, is easy. on the sending side you could use your standard email client (i.e. thunderbird on pc or k9-mail on smartphone) to send to your postfix instance that also sits on the server hosting your dovecot service. the mta there takes the mail and delivers it by rules, which could simply mean using the mta of your freemailer with the username/password of your account for all outgoing mail. i am doing this, but the “external” mail system is my own servers as well; i just don’t want emails to stay too long on VMs in a datacenter where i have no access to the physical disks in case something goes wrong.
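the relay part is a few lines in postfix; hostnames and credentials here are placeholders:

# /etc/postfix/main.cf — hand all outgoing mail to the freemailer’s mta
relayhost = [smtp.example.org]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd, then run: postmap /etc/postfix/sasl_passwd
[smtp.example.org]:587 username:password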
a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for emails only i’d say a 3 or older would do too). adding a disk via usb makes storage huge and cheap; i use two usb ssds in a raid1 for storage… that server could be accessible only through vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9mail and thunderbird, so it fits seamlessly to connect through a haproxy that authenticates these before proxying the plain connection to the pi). clients like thunderbird can offline-store all emails (configure download-or-not per imap folder), making searches easy and quick, while my k9 client can search locally or on the server if needed.
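the haproxy part of that can be quite small; addresses and file names are made up for the sketch:

# haproxy.cfg excerpt — require a client certificate, then pass plain imap to the pi
frontend imaps-in
    mode tcp
    bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/clients-ca.pem verify required
    default_backend pi_imap

backend pi_imap
    mode tcp
    server pi 192.168.1.10:143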
maybe adjust the maximum mail size of your own mta to exactly match (or be slightly less than) that of the freemailer you use, to prevent surprises with big emails that later turn out unsent.
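in postfix that is a single setting; the 25 MB here is just an assumed freemailer limit:

# main.cf — stay at or below the relay’s limit (value in bytes)
message_size_limit = 25000000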
it’s possible to have a nextcloud instance on that same pi that acts as a web mailer, just in case (i really don’t need it, but i’ve set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a too bad idea either ;-)
i suggest also setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)
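one way to do that is restic; the offsite host and paths here are purely hypothetical:

# initialise once, then run nightly from cron or a systemd timer
# (for unattended runs add --password-file /root/.restic-pass)
restic -r sftp:backup@offsite.example.org:/srv/restic-pi init
restic -r sftp:backup@offsite.example.org:/srv/restic-pi backup /var/mail /etc /home
# keep rolling snapshots: 7 daily, 8 weekly
restic -r sftp:backup@offsite.example.org:/srv/restic-pi forget --keep-daily 7 --keep-weekly 8 --prune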
looks like some trees are missing at the stairway to complete the picture…
one example of a program that did multiple things is sfdisk: it used to make the kernel reload the new partition table, but that was not its main job, which was only changing the table. the extra functionality moved to blockdev, which is closer to doing such things, as it also triggers flushing buffers and, i think, setting read/write status. i am fully ok with that change, as it moves code out of a program that doesn’t need it into another that already does similar things. other partitioning programs like gdisk, fdisk or parted could go the same way, so the maintainers of the reread-partition-table logic can concentrate on one solution in one place (in userspace) instead of opening issues at an unknown number of projects that also alter partitions.

the “do one thing” paradigm is good for the developers who maintain the code, and i very much appreciate their work. if you only want one-day flies that either die or eat huge amounts of resources just to stay alive (picture a mayfly in an emergency room, hooked to a heart-lung machine, surgeons rushing around to lengthen its life by a few more seconds), then you are fine with monolithic tools that can hardly be maintained and suck all day, because no one wants to fix any bugs, or cannot without creating new ones due to the tightened internal dependency hell.
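for reference, the split looks like this today; the device name is an example:

# sfdisk (or fdisk/gdisk/parted) edits the table, blockdev tells the kernel
blockdev --rereadpt /dev/sda
# blockdev also covers the related low-level jobs mentioned above:
blockdev --flushbufs /dev/sda
blockdev --setro /dev/sda    # or --setrw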
the point is not a lack of examples of doing it wrong, but where one wants to be heading.
Lol what???
wouldn’t that be the definition of stable?
the computer on voyager 2 has been running for 47 years now. they might have rebooted some parts meanwhile, but overall it’s a long time. if a program is free of bugs, the time it can run only depends on the durability of the hardware and protection from cosmic rays (which, afaik, were the main problems the voyager probes faced, not bugs). that could be quite long if the hardware is protected from hazardous environments, maybe using optoelectronics. the point is that bug-free software can run forever, depending only on hardware durability and energy supply; no humans are needed for a veery long time ;-)
However, systemd makes the system much more secure and reliable as it is
less secure and less reliable day by day, you mean? systemd has introduced needless dependencies ever since its very beginning, as if that were its sole intention, and those dependencies have already been used for wide attacks: exactly the attacks that the people working hard to remove unneeded dependencies for security reasons meant to prevent, with principles like “do one thing only” (though security was not the number one reason for that one, i think). systemd instead went “let’s add another level to that exponential dependency tree from the insecurity hell”, and it felt like they did this stupid thing intentionally every month, for a decade or more.
and stability… if you don’t monitor what systemd does, you’ll never know how bad it actually is. i’ve written custom scripts to monitor systemd’s failures (failing at a very primitive part of its job), and there are hundreds of those per day across all our systems (actually varying around 200 to 300, sometimes more) for one particular(!) measurement alone that was breaking service stability; i wrote a measure-and-fix+monitor workaround for it. other defects were not monitored, only silently fixed by workarounds, so they remain unnumbered systemd bugs/instabilities in the dark that stole a lot of work capacity…
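the core of such a watchdog can be tiny; this is only a sketch of the idea, not my actual script:

# list failed units, log them, try a restart (run from cron or a timer)
for u in $(systemctl list-units --failed --no-legend --plain | awk '{print $1}'); do
    logger -t systemd-watch "unit $u failed, restarting"
    systemctl reset-failed "$u"
    systemctl restart "$u"
done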
if you run distros with systemd, unreliability is your daily experience, unless you don’t really care or have never experienced stability before. stability can look like this: a service (a single process) runs for 8 years without any interruption, then suddenly stops, and you go “was that maybe an attack? the process died, how could that be? were there any connects from outside at that moment?”. i’m not advocating going unpatched that long, but “stability” itself CAN mean: if you don’t stop it, it will still be running in 10000+ years, maybe millions; it is more likely that humans drive themselves extinct long before a process “just dies” from a bug.

systemd, by contrast, randomly stops things that were running fine, for no reason, roughly once a month (varying, also in what it actually stops; twice it even stopped ssh on my servers, leaving me asking myself whether i should create yet another workaround for systemd’s bugginess so it doesn’t lock me out of the network again, or rather go for the real solution to most* of all systemd problems, *see below). this happened on the few standard installs i personally have, since i didn’t have a way to automatically replace the provider-installed distro on VMs in the DC. i want this replacing automated for the same reason i don’t like systemd: it causes manual work for a thing that should be automated. thanks to systemd’s perpetual instability i now do have that automation, and every second spent getting rid of systemd is worth it 100k times.

this still does not solve all systemd-introduced problems, as the xz attack showed (a systemd dependency on xz made the infected xz library useful to the attacker at compile time of the sshd binary, with which the attacker could then infect the newly built sshd). one can still be attacked through systemd’s dependency hell even without using systemd oneself: the build machines used for your distro could be affected/infected by systemd’s needless dependencies when “also” compiling for systemd-affected distributions. so there is a risk of becoming a victim of needless systemd dependencies while not using systemd at all. the fact that the public solution was not the removal of needless dependencies (only included as source for superfluous third-party “needs”) made clear that systemd is an overall security problem that will not be solved quickly; it will stay, just like all the windows insecurities stay for as long as they wish to push them onto their “users”.
systemd reducing overall security, plus its unreliability combined with some built-in impediments (i.e. when debugging its defects), is what drove me away from it. there are solutions that are way more stable, way more secure (and way better documented, btw), that do not pull in needless dependencies, thus reducing risks and attack vectors, and that increase overall debuggability, e.g. through deterministic behaviour, to give an easy example. and none of systemd’s (to me) important promises have been fulfilled yet. drop-in replacement? i have heard that lie thousands of times, but in the last decade i have not experienced it a single time in a distro, and it doesn’t seem to be finished or included any more.
for windows users or windows admins, a linux with systemd on it IS an improvement in stability, security and of course updating, yes. but none of that comes from systemd; rather the opposite is the case, systemd reduces it month by month. that’s my experience, and that’s the most important experience for me. i don’t care what lies the whitepapers tell or which broken promises are believed by anyone or by the masses; i want secure and stable servers and services, and systemd does not fit any of these goals. the time when it was still “young” and early problems could be accepted in the hope they’d get fixed soon is gone, and those fixes never appeared.
maybe try to find a linux user group near where you live. if there is one, you usually get help there, but it’s usually a different sort of help: you don’t get “the solution” to your personal wishes ready-prepared in bite-sized pieces for easy consumption, but rather the advice and suggestions the people there can give you, or things they would directly try out.
open source is about sharing knowledge, and today’s mainstream OS distributions are way more complicated than long ago, so the learning curve for adjusting things in ways the distribution didn’t prepare (which is often a lot) might be steep, but it’s always worth a try, at least for the learning.
for a lightweight desktop environment that is somewhat similar to the old windows 98, i’d say give XFCE a try. i think on debian/ubuntu trying it out could be as easy as installing the xfce (or xfce4?) package (or maybe an xfce4-desktop-environment package); i don’t remember the exact package name, but there is one meta package that depends on all the needed stuff. i did it about 4 years ago… once installed, you can try it by choosing xfce as the desktop environment at login time (your distro should have a login manager that allows this, or you’d have to change that too). if you don’t like it, log out again and log back in with the other one.
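for the record, these meta package names do exist; pick the one matching your distro:

# debian:
sudo apt install task-xfce-desktop
# ubuntu (the xubuntu flavour):
sudo apt install xubuntu-desktop
# or just the bare xfce meta package on either:
sudo apt install xfce4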
i am using xfce because it is clean and lightweight, it does its job, it does not invent new unneeded features every few months (like it felt when i used kde long ago), and it is adjustable enough for me. i removed the lower task bar, put the open-windows components into the bar above and adjusted that a bit; that’s basically all i changed, and i think it is quite similar to what win98 was (but that’s not the reason i have it that way).
also, it is possible to change the window manager (which handles how windows are placed) and the desktop manager (task bar, application menu, maybe widgets, logout buttons) independently, and of course one could also switch x.org to wayland and back without changing the other components. the login window could come from the gnome project, while after login one uses a completely different project’s toolset.
“can” does not mean that every distro makes that an easy task. also, mixing things will likely end in a fuller disk, full of “needed” components that are maybe mostly unused. (i think i once used gnome but installed kde only for its printing dialog *lol)
when using the big distributions, it is likely that no 3rd-party downloads are needed to try other window managers or desktop environments; maybe search for such keywords in aptitude, apt search, or similar. but new fancy stuff also often appears first on unknown 3rd-party websites (or git*.com, which is the same security risk as any 3rd-party website) before it gets into the main repositories after years (or maybe never).
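on debian-based systems the virtual packages make that search easy; these two are real virtual package names:

# list everything in the repos that provides a window manager or a full session
aptitude search '?provides(x-window-manager)'
aptitude search '?provides(x-session-manager)'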
Closest thing I found was TwisterOS. […] and the fan in my case stops working. Aye-yi-yi!
maybe “TwisterOS” tries to invent air movement by software? it might be a random unrelated incident and the fan is simply broken. it might also be that it enabled some fan control, and the fan would only start if you heat the system up enough, which might not happen with a lightweight distro and the maybe not very cpu-consuming programs you use (?). “stress” is a program that can artificially create such cpu load for testing (but with a broken fan it might not be a good idea to actively and unnecessarily heat up the cpu; cpus usually have failsafe shutdown mechanisms so they don’t overheat, but that can be a sudden power-down, so expect unsaved work to just vanish). another test could be to give the fan another power source and see what happens, and to put a different fan that works in its place to see if that changes anything.
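if you want to try that load test, something like this (vcgencmd is raspberry-pi specific):

# load all 4 cores for 60 seconds while watching the temperature
stress --cpu 4 --timeout 60 &
watch -n 2 vcgencmd measure_temp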
You can use
also there is:
./configure && make && make install
just to mention ;-)
First contact was in the here-named eta-carinae system. we did a holiday tour there long ago and heard about earth from a scientist who had rescued a human instead of just studying him, and thus could not leave him there with his memories of the encounter. the human was talking about star trek, its similarities and the real differences, all the time. he already spoke standard Sjesh/sound fluently without any interfaces, so we listened directly to his true mind. he even had a very worn-out tng t-shirt in his personal memory-items box. i mean, he had really used his memory items before! that made us curious, and the rest is history. however, he is now back here: we managed to arrange behavioral training so he hides his experiences well, he passed all the tests and got his transport back, with his biological cell clock reset to his 20th year to compensate a little for the decades he lost out there. it is possible he could become an ambassador for earth one day, but it looks unlikely that he would want that, given the circumstances here: a task he always compares to the mythological boulder of Sisyphus (which never physically existed) whenever he is asked about the opportunity.
just kidding. first contact with TNG was in school, when other kids talked about the first episode. i could not watch it at home and also had other problems to deal with at that time, so i missed a lot of the start of it :-/
however, i am trying to train myself in writing in general, as i have ideas for a longer story (though not within the trek universe), and as the above text came to my mind i just wrote it down. i hope you don’t find it too misplaced here, or badly written… any feedback is welcome.
i plan to get a similar setup (music on a homeserver, synced to the phone for offline use), but i don’t need to sync playlists as i rarely use them; i have a streaming account with one(!) playlist holding all the songs i remembered and wanted to listen to but didn’t buy on CD back then, and i use the radio-like streaming options a lot.
but for syncing the phone with nextcloud i use FolderSync (Pro), and it works as it should. it has lots of possible sync targets and lots of options to sync one or both ways. i have folders with >8000 files that take some time to sync, but it works fine in the background with no problem. i let it sync over the mobile network too, because i value a reliable in-sync status more than bandwidth. however, i haven’t really tried “immediate sync” for new/changed files yet, as i don’t see the need for it, but it’s one of many options.
however, i only use nextcloud sync in one- or two-way syncs, and once used sftp for a one-way sync, so i cannot judge all the other options; but if your playlists are organized as files, their two-way sync might be as easy as with the songs. i bought the pro version on their website, so my license is not bound to a google account.
maybe there was a mixup of individual datapoints and individual persons.
let’s see if that could fit.
as far as i read things in this thread, the whole security is based on exactly these datapoints: full name, date of birth and SSN (three datapoints), plus username and password for 3 sites (six datapoints); that makes 3+6 = 9 datapoints per person.
2.9 billion (US) should be 2.900.000.000 (correct me if i’m wrong, but where i live one “billion” is actually 1.000.000.000.000, thus a “bit” more).
divided by 9, those 2.9 billion datapoints would be ~320 million persons.
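the quick check, for anyone following along:

echo $((2900000000 / 9))   # 322222222, roughly 320 million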
on wikipedia they say the us had 331 million people in 2020…
that would fit like an ass on a bucket! lol just to mention that.
have a nice day!
we need an adblockers blockers blocker
no, what is needed is an app that helps track who benefits from the apps that annoy you most:
the point is NOT buying because of advertising, AND letting them know it, so they can learn to improve themselves.
they wanted your data? let them have it the way you want them to.
same with any platform: ask the creator of your choice to also publish via patreon, and you’ll become a member there, getting the content free of ads. better to pay the person who does the actual work more directly, instead of all the big tech harvesting the benefit in between.
so what is maybe needed here could be a free, or even self-hostable, platform that also allows paid subscriptions.
really, yt stopped playing sound on the website for me (while logged in); there is a banner to “activate sound”, but it always disappears unclickably fast. so i searched and found webtube, an app that basically loads their website but has one feature youtube has not: “sound” *lol
now i wonder how many of these apps really are “third”-party apps, and not actually theirs, merely masked as third party to gain the trust that all the “others” get when it comes to big tech with their very own “public” crime records…
it would be too easy for them to create some small apps, act as if those were 3rd-party software, and harvest that spy-oil (of the 21st century) anyway.
unfuckingbelievable
if you want that without the “un” prefix, then youtube is maybe just the wrong platform for you.
and i strongly believe that youtube “wants” to force more usage of peertube, vimeo and others by enshittifying yt a bit more every day. and they really work hard at it.
yeah! you beat me to it 👍
guess this instance is lost, let’s restart the matrix!
KNC
Kentucky-Nonfrightened-Chicken