A little insane, but in a good way.
I would be happy to, but all current local models are vastly inferior to GPT-3.5. The unfortunate reality is that if you want to create anything high quality, you must use the OpenAI API.
It isn’t available outside the US and the UK, so I can’t try it yet, but I will as soon as I get access.
@AutoTLDR
Ethan Mollick has two recent articles related to this topic:
@AutoTLDR
deleted by creator
deleted by creator
@AutoTLDR the other bot talks too much, please summarize this
@ChatGPT@lemmings.world could you please compare the advantages and disadvantages of having a general-purpose chatbot available in an online forum that can answer any questions, whether relevant to the conversation or not?
AFAIK it doesn’t. But I haven’t got access to it yet, so this is based on tweets by some people who have.
I’ve just tried the version of GPT-4 available via the API. There has been a lot of talk about GPT-4 being “nerfed” since the May update, and I wanted to check whether there is an obvious difference. I ran the same prompt in both the web-based ChatGPT with GPT-4 selected and a self-hosted chat UI that uses the API. The API call used a temperature value of 0.7.
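For reference, the API side of the comparison was essentially a standard chat completions request along these lines (a minimal sketch using curl; the full prompt is the one quoted below):

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "temperature": 0.7,
    "messages": [
      {"role": "user", "content": "I use Nginx as a reverse proxy on my Ubuntu server. ..."}
    ]
  }'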
I use Nginx as a reverse proxy on my Ubuntu server. There is a web app running on localhost:3456, and I want to expose it to the internet using the hostname chat.botond.online. I also want to protect it using HTTP Basic authentication. I have the Apache tools for managing htpasswd files installed.
Could you please write an Nginx config file for this site, and also explain how I can create a htpasswd file and use it in this nginx config to protect it?
Sure, I can certainly help with that. We will perform these tasks in two steps: first, create an htpasswd file, and then write an Nginx config file that uses it.
Let’s start with the first one.
Step 1: Create an htpasswd file
You can create a .htpasswd file by using the htpasswd command that comes with the Apache httpd server. In your case, you can create a user (let’s say user1) like this:
sudo htpasswd -c /etc/nginx/.htpasswd user1
This will prompt you to enter and confirm a password for the user user1. The -c option tells htpasswd to create a new file.
Please note: This will overwrite the file if it already exists. If you want to add additional users to an existing .htpasswd file, do not use the -c flag.
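For example, adding a second user (here a hypothetical user2) to the existing file would simply omit the flag:

sudo htpasswd /etc/nginx/.htpasswd user2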
Step 2: Write an Nginx config file
Now, let’s create an Nginx configuration. You can add a new server block in your Nginx config, or create a new config file under /etc/nginx/sites-available/ and symlink it to /etc/nginx/sites-enabled/. I’ll create a new server block as an example:
server {
    listen 80;
    server_name something.somewhere.com;

    location / {
        auth_basic "Restricted Content"; # This is the message that will be shown in the login prompt
        auth_basic_user_file /etc/nginx/.htpasswd; # The file you created earlier

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:3456;
    }
}
This will set up basic HTTP authentication on your site and also reverse proxy requests to your web app running on localhost:3456.
Note: If you’re planning to expose this to the internet, I would strongly recommend setting up SSL as well. You can use a free service like Let’s Encrypt to get an SSL certificate. If you don’t secure your site with HTTPS, the HTTP Basic authentication credentials will be sent in plain text, which is a significant security risk.
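For example, assuming Ubuntu’s certbot and python3-certbot-nginx packages, obtaining and installing a certificate for this site would look something like this (a sketch, matching the server_name used above):

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d something.somewhere.com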
Sure, let’s start by creating the htpasswd file for HTTP basic authentication. You can create a .htpasswd file using the htpasswd command that comes with the apache2-utils package. If it’s not installed, you can install it via apt-get install apache2-utils.
To create a .htpasswd file, run the following command:
htpasswd -c /etc/nginx/.htpasswd username
Replace username with the username you want. After running this command, you will be prompted to enter and confirm a password for this user. The -c flag is used to create a new file. Do not use this flag if you already have a .htpasswd file and just want to add a new user.
Now, let’s create an Nginx configuration file. Using your favorite text editor (nano, vim, etc.), open up a new configuration file in /etc/nginx/sites-available/, for example:
sudo nano /etc/nginx/sites-available/something.somewhere.com
Then, paste the following configuration:
server {
    listen 80;
    server_name something.somewhere.com;

    location / {
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:3456;
    }
}
In this configuration:
- listen 80; means Nginx is listening on port 80 for incoming connections.
- server_name something.somewhere.com; specifies the hostname.
- auth_basic "Restricted Content"; enables HTTP basic authentication with the message “Restricted Content”.
- auth_basic_user_file /etc/nginx/.htpasswd; specifies the location of the .htpasswd file.
- The proxy_set_header directives are used to pass on some client information to the proxied server.
- proxy_pass http://localhost:3456; is used to forward requests to your web app running at localhost:3456.

Save and close the file.
To enable this site, create a symbolic link to it in the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/something.somewhere.com /etc/nginx/sites-enabled/
Finally, test the configuration and restart Nginx:
sudo nginx -t
sudo systemctl restart nginx
Now your web app should be accessible from the internet at http://something.somewhere.com, and it should ask for a username and password as you specified in the .htpasswd file.
There is a marked difference between the two responses, but as far as I can see, both are correct and high quality. ChatGPT is less verbose; the API generates lots of detail. I slightly prefer the API’s answer.
I will probably try the same experiment with much more complex problems (if anyone has a suggestion for a good test case, please tell me in a comment), but on this simple problem, they both performed very well.
I’ve been waiting for Code Interpreter for a long time. I’m very excited to finally be able to try it. Some interesting examples here:
TL;DR: (human-generated 👱‍♂️)
- gpt-3.5-turbo-instruct
- text-embedding-ada-002, but don’t worry:
“We recognize this is a significant change for developers using those older models. Winding down these models is not a decision we are making lightly. We will cover the financial cost of users re-embedding content with these new models. We will be in touch with impacted users over the coming days.”
Ok, this is an uncharacteristically bad summary, AutoTLDR. Bad bot!
The biggest aha moment with Copilot for me was when I wanted to implement tools for my GPT-based personal assistant. Function calling wasn’t yet available in the OpenAI API, and I found that GPT-3.5 was really bad at using tools consistently in a long chat conversation. So I decided to implement a classifier DAG, with either a simple LLM prompt or a regular function in each of its nodes. Something like this:
what is this? (reminder | todo | other)
  reminder -> what kind of reminder? (one-time | recurring)
    one-time -> return the ISO timestamp and the reminder text in a JSON object like this
    recurring -> return the cron expression and the reminder text in a JSON object like this
  todo -> what kind of todo operation (add | delete | ...)
    ...
  other -> just respond normally
I wrote an example of using this classifier graph in code, something like this (it’s missing a lot of important details):
const decisionTree = new Decision(
  userIntentClassifier, {
    "REMINDER": new Decision(
      reminderClassifier, {
        "ONE_TIME": new Sequence(
          parseNaturalLanguageTime,
          createOneTimeReminder,
          explainAction
        ),
        "RECURRING": new Sequence(
          createRecurringReminder,
          explainAction
        ),
      }
    ),
    "TASK": new Decision(
      taskClassifier, {
        ...
      }
    ),
    "NONE": answerInChat,
  }
);

decisionTree.call(context);
And then I started writing class Decision, class Sequence, etc., and it implemented the classes perfectly!
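For anyone curious, here’s a minimal sketch of what those classes might look like (my reconstruction, not Copilot’s exact output; it assumes a node is either a plain async function taking the context, or an object with a call(context) method):

// A node is either an async function (context) => result,
// or an object with an async call(context) method (Decision / Sequence).
async function runNode(node, context) {
  return typeof node === "function" ? node(context) : node.call(context);
}

class Decision {
  constructor(classifier, branches) {
    this.classifier = classifier; // async (context) => a branch key, e.g. "REMINDER"
    this.branches = branches;     // map of branch key -> node
  }
  async call(context) {
    const key = await this.classifier(context);
    const branch = this.branches[key];
    if (!branch) throw new Error(`No branch for classifier result: ${key}`);
    return runNode(branch, context);
  }
}

class Sequence {
  constructor(...steps) {
    this.steps = steps; // nodes executed in order, sharing the same context
  }
  async call(context) {
    let result;
    for (const step of this.steps) {
      result = await runNode(step, context);
    }
    return result;
  }
}

With this shape, the decisionTree example above works as written (once the elided parts are filled in).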
If you are interested in AI safety - whether you agree with the recent emphasis on it or not - I recommend watching at least a couple of videos by Robert Miles:
https://www.youtube.com/@RobertMilesAI
His videos are very enjoyable and interesting, and he presents a compelling argument for taking AI safety seriously.
Unfortunately, I haven’t found such a high-quality source presenting arguments for the opposing view. If anyone knows of one, I encourage them to share it.
Oh finally. Sorry everyone for this train wreck of a thread.
It would summarize the link. Unfortunately that’s an edge case where the bot doesn’t do what you mean.