https://hub.docker.com/r/fredblgr/ubuntu-novnc/tags
https://hub.docker.com/r/dorowu/ubuntu-desktop-lxde-vnc
https://hub.docker.com/_/httpd/tags
Dialogue between me, life, tech and society
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker rmi $(docker images -a -q)
Dashboard "404 page not found"
docker run -d --privileged --restart=unless-stopped -p 80:80 -p 443:443
Ports 80 and 443 are commonly occupied; we can map other host ports (e.g. 8081:80, 9443:443) to bypass the conflict.
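The port workaround can be sketched as a tiny wrapper that builds the startup command with non-conflicting host ports (a dry-run sketch: the default ports and image tag follow this article, and the final echo only prints the command; remove it to actually launch):

```shell
#!/bin/sh
# Build the Rancher startup command with alternate host ports.
# Defaults are the article's examples; override via environment variables.
HTTP_PORT="${HTTP_PORT:-8081}"    # host port mapped to container port 80
HTTPS_PORT="${HTTPS_PORT:-9443}"  # host port mapped to container port 443
IMAGE="${IMAGE:-rancher/rancher:stable}"

CMD="docker run -d --privileged --restart=unless-stopped -p ${HTTP_PORT}:80 -p ${HTTPS_PORT}:443 ${IMAGE}"
echo "$CMD"   # dry run: print the command instead of running it
```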
docker exec -ti <container_id> reset-password
Change the root password:
echo root:imafish | chpasswd
==============
sudo iptables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to-destination 127.0.0.53:30001
sudo iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 127.0.0.53:30001
sudo iptables -t nat -D PREROUTING -p udp --dport 53 -j DNAT --to-destination 127.0.0.53:30001
sudo iptables -t nat -v -L PREROUTING -n --line-numbers
sudo iptables -t nat -D PREROUTING <rule_number>
sudo iptables -I INPUT -p tcp -m tcp --dport 53 -j ACCEPT
sudo iptables -I INPUT -p udp -m udp --dport 53 -j ACCEPT
sudo iptables -L --line-numbers
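The delete-by-number step above needs the rule's position from the listing. A small filter can pull it out automatically (a sketch: the `$rules` text below is a mocked-up sample of `iptables -t nat -L PREROUTING -n --line-numbers` output, not captured from a real box):

```shell
#!/bin/sh
# Extract the number of the UDP dport-53 DNAT rule, for use with
# `sudo iptables -t nat -D PREROUTING <num>`.
rules='num  target  prot opt source     destination
1    DNAT   tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:53 to:127.0.0.53:30001
2    DNAT   udp  --  0.0.0.0/0  0.0.0.0/0  udp dpt:53 to:127.0.0.53:30001'

# Match only the UDP rule and print its first field (the line number).
num=$(printf '%s\n' "$rules" | awk '/udp dpt:53/ {print $1}')
echo "$num"
```

Against a live system, pipe `sudo iptables -t nat -L PREROUTING -n --line-numbers` into the same awk filter instead of the sample text.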
Free port 53
https://www.linuxuprising.com/2020/07/ubuntu-how-to-free-up-port-53-used-by.html
When A shares a folder with B, each file belongs to whoever uploaded it, and ownership is not easy to transfer when A and B are in different organizations. The same applies to any organization account versus a personal Google Drive.
To move big files (over 20 GB) to B, B can right-click the file, make a copy, and move the copy somewhere else; the new file will belong to B. For small files, we can use Google Colab with the rsync command to sync folders recursively. Each account has a daily 750 GB transfer quota. When the quota is used up, rsync keeps running but generates files of 0 bytes instead of stopping. This was confusing: the folder sizes were not identical, yet rsync stopped copying because the (empty) files already existed.
Useful rsync commands:
Recursive rsync, showing progress and not overwriting existing files:
!rsync --ignore-existing -ra --progress '/content/drive/MyDrive//FolderA/' '/content/drive/MyDrive//FolderB'
Find and delete empty files (-print0/-0 keeps filenames with spaces intact):
!find "/content/drive/MyDrive/Backup_Local/SYR/" -size 0 -type f -print0 | xargs -0 rm
Find the size of the folder (-h sorts human-readable sizes correctly):
!du -sh '/content/drive/MyDrive/Backup/' | sort -h -r
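Note that `du -sh` prints human-readable sizes (K/M/G), which plain `sort -n` misorders; GNU sort's `-h` flag compares the suffixes correctly. A quick self-contained check with fake du-style lines:

```shell
#!/bin/sh
# Three fake `du -sh`-style lines; sort -hr should rank G above M above K.
sizes='512K /a
2.1G /b
900M /c'
top=$(printf '%s\n' "$sizes" | sort -hr | head -n 1)
echo "$top"
```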
Useful links:
https://ourtechroom.com/tech/copy-shared-google-drive-files-folder-to-my-drive/
https://www.tutorialworks.com/kubernetes-pod-communication/
Connecting Applications with Services
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
Rancher's one-line startup script usually comes in this format:
docker run -d -v /data/docker/rancher-server/var/lib/rancher/:/var/lib/rancher/ --restart=unless-stopped --name rancher-server -p 80:80 -p 443:443 rancher/rancher:stable
It crashed all the time but worked fine when we didn't map the volume (docker -v). The container log is at:
/var/lib/docker/containers/<container_id>/<container_id>-json.log
Fixed it by adding the --privileged flag:
docker run --privileged -d -v /data/docker/rancher-server/var/lib/rancher/:/var/lib/rancher/ --restart=unless-stopped --name rancher-server -p 80:80 -p 443:443 rancher/rancher:stable
Deployment guide:
https://blog.51sec.org/2020/07/lightweight-k8s-lab-rancher-22-k3s.html
https://www.youtube.com/watch?v=RY_RarX9TrY
Client A wants to connect to server B, which is behind a firewall. We can open a reverse SSH tunnel from B to server C; client A then connects to the open port on server C, and the traffic is forwarded to B:3389.
plink.exe <user>@<ip or domain> -pw <password> -P 22 -2 -4 -T -N -C -R 0.0.0.0:12345:127.0.0.1:3389
Allow the SSH session to let remote hosts connect to the forwarded ports:
sudo nano /etc/ssh/sshd_config
GatewayPorts=clientspecified
Open port 12345 on server C's firewall.
Ref: https://eviatargerzi.medium.com/how-to-access-rdp-over-ssh-tunnel-c0829631ad44
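On Linux/macOS the same tunnel can be opened with plain OpenSSH instead of plink (a sketch: the user and host are placeholders, and the final echo makes this a dry run; remove it to actually connect):

```shell
#!/bin/sh
# OpenSSH equivalent of the plink command above (run on server B).
REMOTE_PORT=12345          # listening port opened on server C
TARGET=127.0.0.1:3389      # RDP endpoint on server B
SERVER_C="user@serverC"    # placeholder login for server C

# -N: no remote command  -C: compression  -R: reverse port forward
CMD="ssh -N -C -R 0.0.0.0:${REMOTE_PORT}:${TARGET} ${SERVER_C}"
echo "$CMD"   # dry run: print the command instead of running it
```

Binding to 0.0.0.0 on server C only works once `GatewayPorts clientspecified` is set in sshd_config as shown above.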