Somewhere there is an expired certificate, but if you are in a safe environment, then simply add this line to your /etc/cups/client.conf file:
AllowExpiredCerts Yes
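For context, a minimal /etc/cups/client.conf could then look like this (the ServerName value is a made-up placeholder; only the AllowExpiredCerts line comes from this note):

```
ServerName print.example.com
AllowExpiredCerts Yes
```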
Should be:
install.packages("rjags")
But not before installing JAGS, which is not available by default.
I found these packages, which still work fine:
http://download.opensuse.org/repositories/home:/cornell_vrdc/Fedora_24/x86_64/
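For the record, the whole chain on Fedora can be sketched like this (the package names jags4 and jags4-devel are assumptions taken from that repository and may differ on your system):

```shell
# add the repository found above, then install the JAGS library and headers
sudo dnf config-manager --add-repo \
  http://download.opensuse.org/repositories/home:/cornell_vrdc/Fedora_24/
sudo dnf install jags4 jags4-devel
# now the R binding can build against it
R -e 'install.packages("rjags", repos = "https://cloud.r-project.org")'
```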
Beyond Rcpp (which is a wonderful tool), if you need to link to existing libraries, you can do so!
dyn.load("~/R/x86_64-redhat-linux-gnu-library/3.6/expm/libs/expm.so")
balance <- function(A, job = c("B", "N", "P", "S"))
    .Call("R_dgebal", A, match.arg(job))
dgebal <- balance
This is old and I don't think I'll ever need it again, but better visible than hidden forever.
It was not easy to find what I needed to compile GCC 5.2.0 on this board, so here it is for those who need it.
To fix this bug :
In file included from /usr/include/stdio.h:27:0,
                 from ../.././libgcc/../gcc/tsystem.h:87,
                 from ../.././libgcc/libgcc2.c:27:
/usr/include/features.h:374:25: fatal error: sys/cdefs.h: No such file or directory
compilation terminated.
I had to install one of these (I installed them all to avoid any surprises… shame on me…):

apt-get install libc6-dev libc6-dev-arm64-cross libc6-dev-armel libc6-dev-armel-cross libc6-dev-armhf-cross libnewlib-dev

To avoid nasty errors like this:
In file included from ./bconfig.h:3:0,
                 from ../.././gcc/genmddeps.c:18:
./auto-host.h:2188:16: error: declaration does not declare anything [-fpermissive]
 #define rlim_t long
                ^
You may want to configure GCC with a minimal number of languages:
./configure --enable-languages=c,c++
And for me it was enough.
List of ignored files:

git ls-files --others -i --exclude-standard
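A quick way to see it in action, in a throwaway repository (file names here are made up for the example):

```shell
# set up a scratch repo with one ignored and one regular file
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo '*.log' > .gitignore
touch kept.txt debug.log
# list untracked files that are ignored by .gitignore
git ls-files --others -i --exclude-standard
# -> debug.log
```

Note that without --others (or --cached), recent versions of git refuse the -i flag.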
As usual, this article is for me to remember; if you like it, good for you.
Sources:
First, make sure you have an SSH key pair ready to go; otherwise, create one (with something like ssh-keygen).
ls ~/.ssh/
id_rsa  id_rsa.pub  known_hosts
On the server, just copy-paste the content of id_rsa.pub into a file called ~/.ssh/authorized_keys.
cat id_rsa.pub >> ~/.ssh/authorized_keys
The command first:
ssh -L 127.0.0.1:8080:BACKSERVER:8080 -N USERID@FRONTSERVER
-L LOCALIP:LOCALPORT:DISTANTIP:DISTANTPORT : forward LOCALPORT on LOCALIP to DISTANTIP:DISTANTPORT through the remote host.
-N : don't start a shell.
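The same tunnel can also live in ~/.ssh/config so you don't have to retype it (host names are the same placeholders as in the command above; the Host alias "tunnel" is made up):

```
Host tunnel
    Hostname FRONTSERVER
    User USERID
    LocalForward 127.0.0.1:8080 BACKSERVER:8080
```

Then ssh -N tunnel opens the same forward.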
The SSH client has a wonderful configuration file that is empty for most of us; however, when correctly configured, it can save you so much time!
So, let's see my ~/.ssh/config file.
Host jump_server
    User my_user_name_on_the_jump_server
    Hostname DNS_OR_IP_OF_JUMP_SERVER

Host internal_server
    User my_user_name_on_the_internal_server
    Hostname DNS_OR_IP_OF_INTERNAL_SERVER
    Port 22
    ProxyCommand ssh -q -W %h:%p jump_server
With this configuration, when I run ssh jump_server, I SSH directly to the jump server. If I run ssh internal_server, I SSH directly to the internal server, transparently through the jump server. If all these servers have my public SSH key, then no password login, ever.
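On OpenSSH 7.3 and later, the ProxyCommand line can be replaced by the simpler ProxyJump directive; a sketch with the same placeholder names:

```
Host internal_server
    User my_user_name_on_the_internal_server
    Hostname DNS_OR_IP_OF_INTERNAL_SERVER
    ProxyJump jump_server
```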
A nice thing is that we can do Dynamic Port Forwarding with SOCKS proxy in one command now.
ssh -D 9000 internal_server
This command opens a SOCKS proxy on port 9000, which works directly with Firefox.
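Besides Firefox, command-line tools can use the proxy too; for example (assuming the tunnel from the previous command is up, and with a made-up internal hostname):

```shell
# route an HTTP request through the SOCKS proxy,
# resolving the hostname on the far side of the tunnel
curl --socks5-hostname 127.0.0.1:9000 http://internal-only-site/
```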
This article is mostly inspired by existing blog posts:
As usual, this is here mostly for myself. If you found it helpful, good for you.
1) Download the most recent archive from https://spark.apache.org/downloads.html
wget https://downloads.apache.org/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz
2) Extract the archive in /opt/spark/
cd /opt/
sudo tar xzf spark-2.4.5-bin-hadoop2.7.tgz
sudo ln -s /opt/spark-2.4.5-bin-hadoop2.7/ /opt/spark
3) Add the spark user
sudo useradd spark
sudo chown -R spark:spark /opt/spark*
4) Restore the SELinux context (instead of naively turning SELinux off…)
sudo restorecon -rv /opt/spark*
5) Prepare two systemd scripts to start the master and slave services, and run them
/etc/systemd/system/spark-master.service
[Unit]
Description=Apache Spark Master
After=network.target
[Service]
Type=forking
User=spark
Group=spark
ExecStart=/opt/spark/sbin/start-master.sh
ExecStop=/opt/spark/sbin/stop-master.sh
[Install]
WantedBy=multi-user.target
/etc/systemd/system/spark-slave.service
[Unit]
Description=Apache Spark Slave
After=network.target
[Service]
Type=forking
User=spark
Group=spark
ExecStart=/opt/spark/sbin/start-slave.sh spark://X.Y.Z.A:7077
ExecStop=/opt/spark/sbin/stop-slave.sh
[Install]
WantedBy=multi-user.target
We can now reload systemd and start the services.
sudo systemctl daemon-reload
sudo systemctl start spark-master.service
sudo systemctl start spark-slave.service
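If you want the services to come back after a reboot, you can also enable them (same unit names as above):

```shell
sudo systemctl enable spark-master.service
sudo systemctl enable spark-slave.service
# and check they came up
systemctl status spark-master.service
```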
6) The server is ready; however, instead of turning off the firewall, please update it.
The most basic way would be something like that:
sudo iptables -I INPUT 1 -i eno1 -p tcp --dport 8080 -j ACCEPT
sudo iptables -I INPUT 1 -i eno1 -p tcp --dport 8081 -j ACCEPT
sudo iptables -I INPUT 1 -i eno1 -p tcp --dport 7077 -j ACCEPT
If you cannot connect to the server (no name server, for example), you can force the IP address it listens on.
sudo cp /opt/spark-2.4.5-bin-hadoop2.7/conf/spark-env.sh.template /opt/spark-2.4.5-bin-hadoop2.7/conf/spark-env.sh
Then set the following variables in spark-env.sh:
export SPARK_MASTER_IP=X.Y.Z.A
export SPARK_MASTER_HOST=X.Y.Z.A
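To check the cluster is really up, you can submit the bundled SparkPi example against the master (X.Y.Z.A is the same placeholder IP as above; the examples jar for Spark 2.4.5 is built against Scala 2.11, so the file name below is an assumption to verify on your install):

```shell
/opt/spark/bin/spark-submit \
  --master spark://X.Y.Z.A:7077 \
  --class org.apache.spark.examples.SparkPi \
  /opt/spark/examples/jars/spark-examples_2.11-2.4.5.jar 100
```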
Voila!
So this post is in two steps: first, how to build a Linux kernel module; then, how to install a module when you are using a Secure Boot environment.
You can have a simple module by writing a C file module.c such as:
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

int finit(void)
{
    printk(KERN_INFO "Start module\n");
    return 0;
}

void fexit(void)
{
    printk(KERN_INFO "Stop module\n");
}

module_init(finit);
module_exit(fexit);
Compilation is then as easy as this Makefile :
obj-m += module.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
You can then load and unload this module using insmod and rmmod. However, if you have a Secure Boot setup, things get a little bit more complicated.
This time, we will need to sign the modules and enroll the key used to sign them as a trusted key.
First we generate keys:
openssl req -new -x509 -newkey rsa:2048 -keyout MOK.priv -outform DER -out MOK.der -nodes -days 36500 -subj "/CN=NAMEHERE/"
Then we use the key to sign the module:
sudo /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 ./MOK.priv ./MOK.der module.ko
We also need to enroll the key as trusted; at the next boot of the machine, the firmware will ask whether this key is OK:
sudo mokutil --import MOK.der
reboot
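After the reboot (and after confirming the key in the MOK manager screen), you can check that the module really carries a signature; sign-file appends fields that modinfo can display (use the .ko file you actually signed):

```shell
# signer / sig_key / signature lines appear only on a signed module
modinfo module.ko | grep -i sig
```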
And voila!
Sources : https://github.com/greggagne/osc10e/tree/master/ch2 and https://stegard.net/2016/10/virtualbox-secure-boot-ubuntu-fail/
In a child's words, Docker is a tool that makes it easy to run "any" application on "any" system. Of course there are conditions and rules to follow, but to start using Docker there is no need to bother with them yet.
The Docker story starts, as root, like this:
[root@localhost ~]# yum install docker
Yum command has been deprecated, redirecting to '/usr/bin/dnf install docker'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'
Last metadata expiration check: 0:43:54 ago on Sat Aug 13 08:51:04 2016.
Dependencies resolved.
====================================================================================
 Package                Arch    Version                      Repository  Size
====================================================================================
Installing:
 docker                 x86_64  2:1.10.3-26.git1ecb834.fc24  updates     6.7 M
 docker-selinux         x86_64  2:1.10.3-26.git1ecb834.fc24  updates      74 k
 docker-v1.10-migrator  x86_64  2:1.10.3-26.git1ecb834.fc24  updates     1.9 M

Transaction Summary
====================================================================================
Total download size: 8.7 M
Installed size: 35 M
Is this ok [y/N]:
And once you say yes… you are already halfway there; Docker is terribly easy to use.
You need to start the service first:
# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# systemctl start docker
And then you can download an image:
> sudo docker pull "atlassian/agent-setup-git:latest"
Trying to pull repository docker.io/atlassian/agent-setup-git ...
latest: Pulling from docker.io/atlassian/agent-setup-git
6c123565ed5e: Pull complete
2a3a5d549d2b: Pull complete
Digest: sha256:e1d2f19b296912e43eed9ea814dcfddbe68a23256791663e53316a0127ddf375
Status: Downloaded newer image for docker.io/atlassian/agent-setup-git:latest
And you can run your first command in this container:
> sudo docker run "atlassian/agent-setup-git:latest" ls
bin  dev  .....  sys  tmp  usr  var
You can also start writing a Docker configuration file (a Dockerfile), which looks like this:
FROM fedora:24
RUN dnf install -y gtk2-devel cmake libXmu-devel
RUN mkdir /toto
COPY . /toto/
RUN ls /toto
And build an image like this:
> sudo docker build -f DockerFile -t docker.io/bodman/test:fedora24 ./
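The freshly built image can then be run like any other (the tag is the one given to -t above):

```shell
sudo docker run docker.io/bodman/test:fedora24 ls /toto
# should list the files COPY'd into the image
```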
Docker tends to generate a lot of files as it works. It is best to control where those files are created. On Fedora, in the configuration file /etc/sysconfig/docker, find the OPTIONS parameter and add the -g /path/to/store/docker/files argument.
To see information about Docker:
docker info
To increase the maximum size of Docker images, add in the file /etc/sysconfig/docker-storage:

DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=30G"
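After changing either file, restart the daemon and verify the settings were picked up (the "Base Device Size" line only appears with the devicemapper storage driver):

```shell
sudo systemctl restart docker
sudo docker info | grep -i "base device size"
```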
This piece of code was gold for me (thanks to Harry!):
uint32_t mxcsr;
uint32_t mask = 0;
//mask |= 1 << 12; // precision mask
mask |= 1 << 11;   // underflow mask
mask |= 1 << 10;   // overflow mask
mask |= 1 << 9;    // division by zero mask
mask |= 1 << 7;    // invalid operation mask
mask = ~mask;
asm volatile ("stmxcsr %0" : "=m"(mxcsr));
mxcsr &= mask;
asm volatile ("ldmxcsr %0" : : "m"(mxcsr));
When you add it, any division by zero (or other type of error you added to the mask) will trigger a floating-point exception. This can save you a day (or two)…