Friday 14 December 2018

Gitting

Creating a new Git repository (use these steps to set up a Git branch on the first EC2 server and push to GitLab)
Below are the high-level steps to create a repository and push contents to the respective branch in that new repository:

Create the repository from the GitLab console under the Web-Content group, using the name given by the dev
Confirm with the dev the servers and path from which you need to push the contents
Then follow the steps below to push contents onto the respective branch. Confirm with the dev which branches need to be created
Perform these steps using the credentials TT\SoftixDomainAccount, and use the attached GitLab keys if they are not already on the server
Also make sure the attached .gitignore is used
#(1) First initialize git and connect it to the remote repository:

git init

git remote add origin git@gitlab.softix.com:web-content/<repo_name>.git

git checkout -b <branch>

#(2) Check the status of tracked files:

git status

#(3) Consider enabling LFS support and tracking large binary types:

git lfs install

git lfs track "*.pdf" 

git lfs track "*.jpeg"

git lfs track "*.png"

git config http.sslverify false

#(4) Add all changed files to be committed locally:

git add .    # '.' also stages dotfiles such as .gitignore, unlike '*'

#(5) Commit all the changed files locally:

git commit -m "Pushing Premier Prod Contents to new PROD branch"

#(6) Push the new changes to the remote PROD-AU branch:

git push -u origin PROD-AU
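The steps above can be wrapped into one hedged sketch. The function name is made up; the repo URL and branch are the values you confirm with the dev, and the LFS steps are skipped when the git-lfs client is not on the server:

```shell
# Sketch of steps (1)-(6); run from the content folder being published.
# repo_url and branch are placeholders confirmed with the dev.
init_and_push() {
  repo_url="$1"; branch="$2"
  git init
  git remote add origin "$repo_url"
  git checkout -b "$branch"
  # Enable LFS tracking only if the client exists on this server
  if command -v git-lfs >/dev/null 2>&1; then
    git lfs install
    git lfs track "*.pdf" "*.jpeg" "*.png"
  fi
  git config http.sslverify false
  git add .    # '.' also stages dotfiles such as .gitignore, unlike '*'
  git commit -m "Pushing contents to new $branch branch"
  git push -u origin "$branch"
}
```

Example: init_and_push git@gitlab.softix.com:web-content/TKTAU-Content-PR.git PROD-AU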



Sync the Git content on the other EC2 servers (same load balancer/target group etc.) after setting up the Git repo from scratch via the steps above
git init

git remote -v

git remote add origin git@gitlab.softix.com:web-content/<repo_name>.git

git checkout -b <branch>

git lfs install

git lfs track "*.pdf"     #large binaries for these patterns are stored in LFS rather than in the normal git history

git lfs track "*.jpeg"

git lfs track "*.png"

git config http.sslverify false

git fetch --all

git reset --hard origin/<branch>

git pull origin <branch>

git branch -u origin/<branch>  #If using the PowerShell scripts on the Git-Mgmt-Prod or Git-Mgmt-UAT servers to manage git pulls, be sure to run this command, otherwise you will see git pull errors
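Since git reset --hard silently discards any local edits, a defensive sketch (the sync_branch helper name is made up) can refuse to sync while uncommitted changes are present:

```shell
# Hypothetical helper: hard-sync the checkout to the remote branch, but
# refuse to run while uncommitted local changes would be destroyed.
sync_branch() {
  branch="$1"
  if [ -n "$(git status --porcelain)" ]; then
    echo "Uncommitted local changes present; aborting sync of $branch." >&2
    return 1
  fi
  git fetch --all
  git reset --hard "origin/$branch"
  git branch -u "origin/$branch"
}
```

Run it from the IIS root of the checkout, e.g. sync_branch prod-au.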

Naming convention for new Git branches (add the AU/NZ/MY suffix)
Branch names must be of the form <env>-<region>: prod-au / prod-nz / preview-au / preview-nz / uat-au / uat-nz

For Malaysia: prod-my / preview-my / uat-my
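A quick check of a proposed branch name against the convention, before creating it in GitLab (the helper name is made up):

```shell
# Hypothetical helper: succeed only for names matching <env>-<region>,
# with env in prod/preview/uat and region in au/nz/my.
is_valid_branch() {
  case "$1" in
    prod-au|prod-nz|prod-my) return 0 ;;
    preview-au|preview-nz|preview-my) return 0 ;;
    uat-au|uat-nz|uat-my) return 0 ;;
    *) return 1 ;;
  esac
}
```

Note that git branch names are case sensitive, so PROD-AU and prod-au are different branches; confirm with the dev which form the repo actually uses.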



Git repositories that have been set up are currently tracked in an Excel spreadsheet called GitRepoMapping.xlsx at this path:

\\fileserver.tt.local\Common\DevOps\For_Parikshit\Git-Setup\GitRepoMapping.xlsx



Git auto-sync setup after the Git repo/branches have been set up as above
There are currently three different ways of doing auto git sync/pull for web contents:
1. Git pull PowerShell scripts running locally on the web servers, configured via Windows scheduled tasks.
2. Git pull PowerShell scripts running remotely from two Git management servers: Git-Mgmt-Prod (IP: 10.130.122.41) and Git-Mgmt-UAT (IP: 10.199.122.145). Log in as softixdomainaccount.
3. The newer approach: git pull via Jenkins and a GitLab webhook configured on https://jenkins.ticketek.com.au/job/GitPull/ on the Jenkins-AutoScaleGroup server in the new teg-shared AWS account.

1. Git pull PowerShell scripts running locally on the web servers, configured via Windows scheduled tasks
Go to Windows Task Scheduler and check whether any jobs are configured.

Gotchas:

If the local git pull is not working, ensure the scheduled task is running as the tt\softixdomainaccount account, not the local admin one. Secondly, ensure the .ssh folder and files have been copied there. You can copy the .ssh folder with all required files from the UAT server "C:\Users\softixdomainaccount\.ssh" @ 10.199.40.93

2. Git pull PowerShell scripts running remotely from the two Git management servers
Pre-requisite:

Make sure the EC2 servers are allowing inbound TCP ports 5985-5986 from the Git-Mgmt-Prod and Git-Mgmt-UAT servers; this is required for running git pull PowerShell scripts remotely. Double-check their security groups.

Both Git mgmt servers use a PowerShell AWS credential profile to retrieve information from AWS:

PS C:\> Get-AWSCredential -ListProfile
WARNING: The ListProfile switch is deprecated and will be removed from a future release. Please use ListProfileDetail
instead.
GitPullProfile
default
default
access

The GitPullProfile should use credentials from the Git-EC2-Readonly or the GitPull IAM user in the Ticketek AWS account.







Git-Mgmt-Prod is for production servers:
All remote git pull scripts are in the C:\Scripts\  folder.

The Working-Scripts subfolder contains the scripts that are already set up and running, each with a corresponding scheduled task.

The Scripts-Templates subfolder contains templates. For production servers we normally use either GitPull-Templates-TargetGroup or GitPull-Templates-ELBClassic, provided you can find a suitable target group or load balancer covering the whole production server fleet. We don't normally use GitPull-Templates-SingleInstance.

To setup new auto git pull using these templates, follow these steps:

a) Make a copy of "GitPull-Templates-TargetGroup" or "GitPull-Templates-ELBClassic" from the "Scripts-Templates" folder and place it into the "Working-Scripts\ForTargetGroups" or "Working-Scripts\ForClassicELBs" folder.

b) Change the folder name to a distinctive and meaningful name such as AU_WhiteLabel_Premier_Mobile.

c) Edit the env.ps1 file and set $env to the folder name you just chose, e.g. "AU_WhiteLabel_Premier_Mobile";

change $targetGroupName to point to the target group that includes all the EC2 servers for the web application, e.g. InvictusGames;

change $seconds, the run interval for the git pull scheduled task.



Edit the GitPullScriptTG.ps1 file and change $gitDirectory to point to the correct IIS root folder path for the web app.

Caution: always triple check $gitDirectory to ensure the correct IIS root folder path is used!!!
E.g. $gitDirectory ="Z:\TicketekAU\WhiteLabels\MobilePremier"

Change $branch to point to the correct git branch.

Warning: Please change $branch with great caution!! Any mistake may end in disaster. The correct format is normally $branch="uat-au" or $branch="preview-au"
This is CASE SENSITIVE!!!

If unsure, just cd to the root folder of the IIS site and run git status to find the correct branch, and git remote -v to find the correct remote repo.
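For example, from the IIS site root the branch and remote URL can be read directly (git remote get-url needs git 2.7+; on older clients use git remote -v as above):

```shell
# Print the checked-out branch and the remote URL of the current checkout
git rev-parse --abbrev-ref HEAD   # current branch, e.g. prod-au
git remote get-url origin         # remote repo URL
```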



d) Launch a cmd in admin mode, cd to the folder where all the script files (env.ps1, GitPullScriptTG.ps1 etc.) are located, and run ./GitTaskGeneratorTG.ps1 to generate a scheduled task for the webapp.

e) Check the log file in C:\Scripts\Logs for errors. Get the dev to do a test commit and see if you receive a Slack notification in the git_notifications or test-channel01 channel.

It should look like:

Git Pull Report for All 4 Targets in TargetGroup InvictusGames
**********************************************************************
4 servers with successful git pull update applied:
i-02e37dfc8ddb24783;i-0a0c90783e3c02e47;i-02c956b88711d9d8d;i-0096ac8c705677051
0 servers without new git pull update applied:

0 servers with potential git pull errors:

One sample message of successful git pull update:
-------------------------------------------------------------------
Git Pull Notification
Site: AU_WhiteLabel_Premier
InstanceID: i-02e37dfc8ddb24783
ServerIP: 10.130.40.254
DateTime: 09/05/2018 10:23:13
Author: Marion Wood <marionw@ticketek.com.au>
Last Commit ID: 288116a
Last Commit Description:
FCT-777 - Created test survey for Fastcheck
-------------------------------------------------------------------
#############################################################



Git-Mgmt-UAT is for Preview and Uat servers:
Script files are in the same folder as prod: C:\Scripts

The only difference is that we normally use the C:\Scripts\Scripts-Templates templates and put them in the C:\Scripts\Working-Scripts\ForSingleServer folder, as preview and UAT servers are normally standalone, not clustered.

For most preview server webapps we normally use the Hostworks preview server, i.e. $fqdn_Git="TKTPRPWADM01-2.tt.local", so you can copy the AU-ML-MOB-PRM folder from the working-scripts folder and simply change the variables. Keep the $fqdn_Git="TKTPRPWADM01-2.tt.local" setting if the preview site is hosted there.

Gotchas:

The two Git-Mgmt-xxx servers are t2.medium instances. We can consider increasing the RAM, as many git pull jobs run every few minutes, so there may be occasional memory/CPU spikes causing git pull slowness.



3. New ways of git pull via Jenkins and Gitlab webhook

The 2nd approach above currently applies only to servers hosted in our old existing Ticketek account, not yet to servers launched in the new teg-prod-au account: the Git management servers reside in the old account, and there are permission issues when trying to remotely manage the new servers from them. Hence the 3rd approach, git pull via the Jenkins server hosted in the new teg-shared account.

Login to the Jenkins server via https://jenkins.ticketek.com.au/

Navigate to the GitPull folder; you will see several subfolders there.

The ticketek subfolder has a template Tmplt-Prod-AU-Ticketek with permissions for managing the old existing Ticketek account (ID 389920326251), while the teg-prod-au subfolder has a template for the new teg-prod-au account (167471469006).

Steps for configuring the auto git pull for this:

Make sure the new EC2 servers allow inbound TCP ports 5985-5986 from the Jenkins server. You can manually open the ports in the security group by adding inbound 5985-5986 from the subnet range 10.161.64.0/18. However, since the new AWS account resources are launched via IaC, it is advisable to add this to the CloudFormation/Packer scripts to automate everything.
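As a hedged sketch, the manual security-group step can be done with the AWS CLI (the group ID is a placeholder argument; the CIDR is the Jenkins subnet noted above; the helper name is made up):

```shell
# Hypothetical helper: allow WinRM (TCP 5985-5986) from the Jenkins subnet
# into a given security group.
open_winrm_from_jenkins() {
  aws ec2 authorize-security-group-ingress \
    --group-id "$1" \
    --protocol tcp \
    --port 5985-5986 \
    --cidr 10.161.64.0/18
}
```

Example: open_winrm_from_jenkins sg-0123456789abcdef0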

Enable the PSRemoting feature by running this PowerShell command: "powershell.exe -NoProfile -ExecutionPolicy Bypass -File \\10.130.0.215\Common\Scripts\PSRemoting\ConfigureRemotingForAnsible.ps1 -Verbose -CertValidityDays 3650 -EnableCredSSP -ForceNewSSLCert -SkipNetworkProfileCheck". Again, it is advisable to add this to the CloudFormation/Packer scripts to automate everything.
Go to the https://jenkins.ticketek.com.au/job/GitPull/job/teg-prod-au/ folder and create a new Jenkins item/job by copying the existing template "Tmplt-ByTag-tegProdAU"; give it a meaningful name such as Premier-premier-PRODA-Desktop.
Go to the Configure page and update all the parameters for the job:

branch - the git branch to pull.

gitRootDir - the IIS root directory to git pull into. Put the correct git root directory here as the default value, and use DOUBLE backslashes in the Windows path, without quotes, e.g. C:\\testSite.

key - the key of the tag used to filter the EC2 instances, e.g. Name or Role. ###Warning: please ensure this key/value returns the exact number of servers you intend to manage, as UNWANTED servers can be included###

value - the value of the tag used to filter the EC2 instances, e.g. Premier. ###Warning: same as above###

valueName - the name used to create a temp inventory file for this job. Provide a unique name WITHOUT ANY WHITESPACE identifying the EC2 hosts involved in this git pull job, e.g. "MemberlinkAU" if the job targets all servers for Memberlink AU.
change the repository url to the correct one, e.g. git@gitlab.softix.com:web-content/TKTAU-Content-PR.git
Go to the Build Triggers section, click Advanced, and manually change the Include field under "Filter branches by name" to match the correct git branch, so that the GitLab trigger fires whenever a dev pushes new commits to the GitLab repo.
Copy the webhook URL and the secret token (you can generate a new one) and paste them into the GitLab portal as shown below.

Log on to GitLab, go to the web-content repo page, click Settings > Integrations, paste the webhook URL and secret token copied from Jenkins, and save. You can click Test to check that it returns a 200 success message.

Save all your settings on the Jenkins page and get the dev to do some test commits. If everything is configured properly you should see Slack messages posted in git_notifications or test-channel01.
One important thing about using this Jenkins job for auto git pull: the EC2 servers are filtered via EC2 tags, so you need to make sure the tag key and value return exactly the expected number of servers, not too many, not too few, and that they are the correct servers. If the tag key and value do not filter properly, consider adding or updating unique tags on the servers; otherwise you might accidentally git pull the wrong content to the wrong servers.
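Before saving the job, the tag filter can be dry-run from any box with the AWS CLI configured. This sketch (the helper name is made up) just counts the matching running instances so you can compare against the expected fleet size:

```shell
# Hypothetical helper: count running EC2 instances matching a tag key/value,
# mirroring the filter the Jenkins job will use.
count_tagged_instances() {
  aws ec2 describe-instances \
    --filters "Name=tag:$1,Values=$2" "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text | wc -w
}
```

If the count differs from the number of servers you intend to manage, fix the tags before enabling the job.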
If running successfully, you should see slack logs like this:
********************Git Pull Report Begins********************
- '#WebsiteName#: AU_Mobile_Premier'
- '#RootDir#: D:\TicketekAU\Powerweb\MobilePremier'
TASK [Auto Git Pull for Branch PROD-AU] ****************************************
changed: [WIN-DCNVJ2PA4M5]
changed: [WIN-MV5NIOIOUIU]
changed: [WIN-4CCANG3LSOI]
changed: [WIN-3J9G03G3HVE]

TASK [debug] *******************************************************************
ok: [WIN-MV5NIOIOUIU] =>
gitPullResult.stdout_lines:
- HEAD is now at c4328dc FCT 788 - Added file changes but not Live yet
- Already up to date.
- 'Author: Naga Rao <nagar@ticketek.com.au>'
- 'Date: Wed Sep 5 14:48:22 2018 +1000'
ok: [WIN-3J9G03G3HVE] =>
gitPullResult.stdout_lines:
- HEAD is now at c4328dc FCT 788 - Added file changes but not Live yet
- Already up to date.
- 'Author: Naga Rao <nagar@ticketek.com.au>'
- 'Date: Wed Sep 5 14:48:22 2018 +1000'
ok: [WIN-DCNVJ2PA4M5] =>
gitPullResult.stdout_lines:
- HEAD is now at c4328dc FCT 788 - Added file changes but not Live yet
- Already up to date.
- 'Author: Naga Rao <nagar@ticketek.com.au>'
- 'Date: Wed Sep 5 14:48:22 2018 +1000'
ok: [WIN-4CCANG3LSOI] =>
gitPullResult.stdout_lines:
- HEAD is now at c4328dc FCT 788 - Added file changes but not Live yet
- Already up to date.
- 'Author: Naga Rao <nagar@ticketek.com.au>'
- 'Date: Wed Sep 5 14:48:22 2018 +1000'
to retry, use: --limit @/gitPull/Infrastructure-Automation-Ansible/cloudformation/roles/GitPull/tasks/playbooks/Premier-PRODA-Mobile.retry

PLAY RECAP *********************************************************************
WIN-3J9G03G3HVE : ok=4 changed=2 unreachable=0 failed=0
WIN-4CCANG3LSOI : ok=4 changed=2 unreachable=0 failed=0
WIN-DCNVJ2PA4M5 : ok=4 changed=2 unreachable=0 failed=0
WIN-MV5NIOIOUIU : ok=4 changed=2 unreachable=0 failed=0

Gotchas:

If you see errors like the one below, ensure tt\softixdomainaccount is a member of the local Administrators group on the target servers (apart from checking the firewall port):

kerberos: HTTPSConnectionPool(host='win-3cfv8ejkcvn', port=5986): Max retries exceeded with url: /wsman (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f1acc6da7d0>, 'Connection to win-3cfv8ejkcvn timed out. (connect timeout=30)')

If you see the error below on the Jenkins web portal or the Jenkins server CLI, run "chown -R jenkins:jenkins /gitPull" to fix the ownership:

[WARNING]: Could not create retry file '/gitPull/Infrastructure-Automation-
Ansible/cloudformation/roles/GitPull/tasks/playbooks/Premier-PRODA.retry'. [Errno 13] Permission denied: u'/gitPull
/Infrastructure-Automation-Ansible/cloudformation/roles/GitPull/tasks/playbooks/Premier-PRODA.retry'

Note:

The Jenkins server does not require IAM user credentials as it already has an EC2 IAM role attached, which allows it to assume a PowerUser role for doing git pulls across different accounts.



Multi-bootstrap PowerShell (again)

<powershell>
$Role = "Powerweb"
$Environment = "PROD-A"
$DeployScriptPath = '\\10.130.0.215\temp_share\tmp\userdata-scripts'
New-Item C:\Install -type directory
Copy-Item $DeployScriptPath\ec2-userdata-13082018.ps1 C:\Install
& C:\Install\ec2-userdata-13082018.ps1 -Role $Role -Environment $Environment

</powershell>

<powershell>
Param(
   [string]$Role, [string]$Environment
)

Set-ExecutionPolicy -Force Unrestricted

#Set source directory for install and pre-requisite files
#
$source = "C:\Deployment"
if(!(Test-Path -Path $source )){
    New-Item -ItemType directory -Path $source
}

#start Logging
Start-Transcript -Path $source\Bootstrap.txt

#initialize new Volumes Win 2016
if((Test-Path -Path C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts)){
    C:\ProgramData\Amazon\EC2-Windows\Launch\Scripts\InitializeDisks.ps1
}

If (Test-Path -Path "D:\"){
  Set-Volume -DriveLetter D -NewFileSystemLabel Data
}

If (Test-Path "E:\"){
  Set-Volume -DriveLetter E -NewFileSystemLabel Logs
}

#Install Windows Tools and Features
#
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Name 'Carbon' -AllowClobber -Force

Import-Module ServerManager

Install-WindowsFeature Web-Server,Web-Http-Redirect,Web-Custom-Logging,Web-Log-Libraries,Web-Request-Monitor,Web-Dyn-Compression,Web-Basic-Auth,Web-IP-Security,Web-Url-Auth,Web-Windows-Auth,Web-Net-Ext45,Web-Asp-Net45,Web-AppInit,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Mgmt-Console,Web-Mgmt-Tools,Web-Mgmt-Compat,Web-Metabase,Web-Lgcy-Mgmt-Console,Web-Lgcy-Scripting,Web-WMI,Web-Scripting-Tools,Web-Mgmt-Service,smtp-server
Add-WindowsFeature NET-WCF-HTTP-Activation45, NET-HTTP-Activation

Import-Module 'Carbon'
Import-Module AWSPowerShell

#Copy pre-requisite Files and Scripts
#
robocopy /MIR /NFL /NDL /Z \\10.130.0.215\Common\Scripts\Common $source
robocopy /MIR /NFL /NDL /Z \\10.130.0.215\temp_share\tmp\configs $source\API
robocopy /MIR /NFL /NDL /Z \\10.130.0.215\Common\Tools\AppDynamics\dotNetAgent $source\AppDynamics

Copy-Item \\10.130.0.215\Common\Tools\ConnectPerformanceCounterCategoryInstall.exe $source -Verbose

#Functions will be split from the main file and used later
#
#Import Functions
#. $source\IIS.ps1
#. $source\Misc.ps1
. $source\InstallOctopus.ps1
#. $source\Functions.ps1


#Set default region for AWS cli tools
#
$instance_az = Invoke-RestMethod -uri http://169.254.169.254/latest/meta-data/placement/availability-zone
$instance_region = $instance_az.Substring(0,$instance_az.Length-1)
Set-DefaultAWSRegion -Region $instance_region

#Update Hostname Tag in AWS
#
$hostname = $env:COMPUTERNAME
$aws_instance =  Invoke-RestMethod -uri http://169.254.169.254/latest/meta-data/instance-id

$tag = New-Object Amazon.EC2.Model.Tag
$tag.Key = "Hostname"
$tag.Value = $hostname
New-EC2Tag -Resource $aws_instance -Tag $tag

### Will have a join domain function in CFN
#
#Add instance to AD
$username = 'tt.local\softixdomainaccount'
$password = 'PA$$word70'
$secstr = New-Object -TypeName System.Security.SecureString
$password.ToCharArray() | ForEach-Object {$secstr.AppendChar($_)}
$cred = New-Object -typename System.Management.Automation.PSCredential -argumentlist $username, $secstr
Add-Computer -DomainName tt.local -OUPath "OU=AWS,OU=Computers,OU=TT-Managed,DC=tt,DC=local" -Force -Credential $cred
"complete join to domain"

### Add domain users to local Administrators
#
$users = @("softixdomainaccount","Appadmin Security","TT Server Desktop Administrators")

$group = [ADSI]("WinNT://$hostname/Administrators,group")
$groupname = $group.PSBase.Name

<#
#
#$user1 = [ADSI]("WinNT://tt.local/softixdomainaccount")
#$user2 = [ADSI]("WinNT://tt.local/Appadmin Security")
#$$user3 = [ADSI]("WinNT://tt.local/TT Server Desktop Administrators")
#$users = $user1,$user2,$user3
#"add users to admin group"
#"$user1"
#"$user2"
#"$user3"
#
#>

$membersObj = @($group.psbase.Invoke("Members"))
$members = ($membersObj | foreach {$_.GetType().InvokeMember("Name", 'GetProperty', $null, $_, $null)})

ForEach ($user in $users) {
$userads = [ADSI]("WinNT://tt.local/$user")
$name = $userads.PSBase.Name
If ($members -contains $userads.PSBase.Name ){
     Write-Host "$name exists in the group $groupname"
}
Else {
       Write-Host "$name does not exist in the group $groupname"
       "add user to admin group"
       $group.PSBase.Invoke("Add",$userads.PSBase.Path)
       "$name has been added to the $group"
}
}

#Set instance System Locale
#
Import-Module International

tzutil /s "AUS Eastern Standard Time"

$currentlist = Get-WinUserLanguageList
$currentlist | ForEach-Object {if(($_.LanguageTag -ne "en-AU") -and ($_.LanguageTag -ne "en-US")){exit}}

Set-WinUserLanguageList en-AU -Force
Set-WinSystemLocale en-AU
Set-Culture en-AU

#Enable Remoting
#
Enable-PSRemoting

#Create Default Site Directories
New-Item D:\Ticketek\ -type directory
New-Item E:\Weblogs -type directory
New-Item E:\Logs -type directory  # to do, work out how to do permissions to .\IIS_IUSRS
New-Item D:\Common\LoadBalancer -type directory
New-Item D:\Common\LoadBalancer -Name "index.html" -type file

#Change ACL for new volumes
#
$Paths = @("E:\","D:\")

ForEach ($Path in $Paths) {
  $Acl = (Get-Item $Path).GetAccessControl('Access')
  $Ar = New-Object  system.security.accesscontrol.filesystemaccessrule("everyone","modify","ContainerInherit,Objectinherit","none","Allow")
  $Acl.SetAccessRule($Ar)
  $Acl | Set-Acl $Path
}


Remove-WebAppPool '.NET v4.5'
Remove-WebAppPool '.NET v4.5 Classic'

Set-WebConfigurationProperty "/system.applicationHost/sites/siteDefaults" -name logfile.directory -value E:\Weblogs

$p = (Get-Item IIS:\AppPools\DefaultAppPool)
$p.managedRunTimeVersion = ''
$p | Set-Item

Stop-Website -name 'Default Web Site'
Rename-Item 'IIS:\Sites\Default Web Site' 'Load Balancer'
Set-ItemProperty 'IIS:\Sites\Load Balancer' -Name bindings -Value @{protocol="http";bindingInformation="*:8000:"}
Set-ItemProperty 'IIS:\Sites\Load Balancer' -Name physicalPath -Value D:\Common\LoadBalancer
Set-ItemProperty 'IIS:\Sites\Load Balancer' -Name applicationPool -Value 'DefaultAppPool'
Set-ItemProperty 'IIS:\Sites\Load Balancer' -Name id -Value 1
Start-Website -name 'Load Balancer'

New-NetFirewallRule -DisplayName 'Load Balancer' -Direction Inbound -Protocol TCP -LocalPort 8000 -Action Allow
New-NetFirewallRule -DisplayName 'Octopus Tentacle' -Direction Inbound -Protocol TCP -LocalPort 10933 -Action Allow
New-NetFirewallRule -DisplayName 'SMB' -Direction Inbound -Protocol TCP -LocalPort 445 -Action Allow


Copy-Item \\10.130.0.215\Common\Utilities\IncreasePerformanceCounters.ps1 $source -Verbose

$regfiles = @("db_aliases-production.reg","localisation_au.reg","tls_1_2_change_for_NET_Apps.reg")
$utilpath = "\\10.130.0.215\Common\Utilities"
Foreach ($regfile in $regfiles)
{
  Copy-Item $utilpath\$regfile $source -Verbose
  regedit /s $source\$regfile
}

C:\Deployment\IncreasePerformanceCounters.ps1

##Install Chrome and notepad++
write-host "########## Installing Git, Notepad++ and Chrome ##########"

$nppUrl = "https://notepad-plus-plus.org/repository/7.x/7.5.8/npp.7.5.8.Installer.x64.exe"
$chromeUrl = "http://dl.google.com/chrome/install/375.126/chrome_installer.exe"
$nppoutput = "C:\Deployment\npp.exe"
$chromeoutput = "C:\Deployment\ChromeSetup.exe"
$wc = New-Object System.Net.WebClient
$wc.Headers.Add("user-agent", "PowerShell")
$wc.DownloadFile($nppUrl, $nppoutput)
$wc.DownloadFile($chromeUrl, $chromeoutput)

Start-Process -FilePath "C:\Deployment\npp.exe" -ArgumentList "/S" -Wait
Start-Process -FilePath "C:\Deployment\ChromeSetup.exe" -Args "/silent /install" -Verb RunAs -Wait
#end installation

## Install connect Performance Counter for all App with connection to the Origin
#
C:\Deployment\ConnectPerformanceCounterCategoryInstall.exe
#end installation

#Set config path for SumoLogic and AppDynamics
#
$rolecfg = $Role.replace(' ','')
$configPath = "C:\Deployment\API\$rolecfg"

#Installing and configuring Sumo Logic
#
write-host "########## Install Sumologic Collector ##########"
#
New-Item -ItemType directory -Path $source\Sumo
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12'
Invoke-WebRequest 'https://collectors.sumologic.com/rest/download/win64' -outfile 'C:\Deployment\Sumo\SumoCollector.exe'

C:\Deployment\Sumo\SumoCollector.exe -console -q "-VskipRegistration=true" "-Vsumo.accessid=suf85JGx5cAUUq" "-Vsumo.accesskey=p32H3qFUErPdSQuxsgVBVT6VeVT6aXKqm1e3sRebNiWgJ95QUB8lYfKkWMmn3E2W" "-Vsources=$configPath\$rolecfg.json"
timeout 30
start-service sumo-collector

#Installing and configuring AppDynamics
#
write-host " Installing App Dynamics..."
#
#Installs the .NET and Machine AppDynamics Agents
#Function requires the config parameter config to be specified. This dictates which .xml configuration file it will use to create the correct Application Tier for the dotNetAgent.
#
#Install-AppDynamics -config $Role
#

$tempDir = "C:\windows\temp"
$sourcePath = "C:\Deployment\AppDynamics"
$installer = "$sourcePath\dotNetAgentSetup.msi"
$configFile = $configPath + "\" + $rolecfg + ".xml"

$options = "/q /norestart /l $tempDir\AgentInstaller.log AD_SetupFile=$configFile"
$msiInstall = "/i $installer $options"
echo "start-process msiexec $msiInstall -wait"
start-process msiexec $msiInstall -wait
iisreset /noforce

## Disabling real-time scan for Windows Defender - applicable only to Windows Server 2016 (Defender is not available on Windows Server 2012)
Set-MpPreference -DisableRealtimeMonitoring $true #turn real-time protection off
$preferences = Get-MpPreference #gets preferences for the Windows Defender scans and updates
$status = $preferences.DisableRealtimeMonitoring #store current status of Real-time Protection in $status
Write-host " "
if ($status) {
   Write-host "Real-time Protection is  OFF"
} Else {
   Write-host "Real-time Protection is  ON"
}
Write-host " "

####*********** SOME EXTRAS ***********###

#### Install a tentacle and create a role for Octopus server Using Cloud Formation ####

If (($Role -ne "NONE") -and ($Environment -ne "NONE")) {
Write-Host "Role is " $Role
Write-Host "Environment is " $Environment
InstallOctopus -Role $Role -Environments $Environment
}

# Stop logging
Stop-Transcript

Restart-Computer -Force

</powershell>

Thursday 22 November 2018

Pi network boot and PXE booting CentOS via Ubuntu

Introduction

We're going to need a server; it will offer kernel boot images over TFTP, then filesystems over NFS. We'll use one SD card to convert Pis, which we call the "Magic Card", and another SD card as the "Gold Image", which will become the kernel and filesystem later used by the network client.

Setting up DHCP/TFTP[edit]

Install DHCP server (needs to be done on the server)

  • for Debian based system
sudo apt-get install isc-dhcp-server
  • for RHEL based system
sudo yum -y install dhcp
In both cases the config will be as follows:
subnet 192.168.0.0 netmask 255.255.255.0 {
 range 192.168.0.20 192.168.0.250;
 option broadcast-address 192.168.0.255;
 option routers 192.168.0.1;
 option subnet-mask 255.255.255.0;
 option domain-name "pxe.server";
 option domain-name-servers 10.161.0.1, 192.168.0.99, 8.8.8.8;
 next-server 192.168.0.15;
 option tftp-server-name "192.168.0.15";
 filename "pxelinux.0";
}

Install TFTP server (needs to be done on the server)

  • for Debian based system
sudo apt-get install -y syslinux tftpd tftp pxelinux
  • for RHEL based system
sudo yum -y install syslinux xinetd tftp-server
Edit the file /etc/xinetd.d/tftp:
service tftp
{
protocol        = udp
port            = 69
socket_type     = dgram
wait            = yes
user            = nobody
server          = /usr/sbin/in.tftpd
server_args     = /tftpboot
disable         = no
}
Start the relevant services as:
sudo systemctl start xinetd 
sudo systemctl enable xinetd

Install DNSMASQ server (needs to be done on the server)

If there is an existing DHCP server, we need a DHCP server that can work in proxy mode to fit into the existing setup. Here are the steps to achieve that:
  • for Debian based system
sudo apt -y install dnsmasq
  • for RHEL based system
sudo yum -y install dnsmasq
Configure dnsmasq to work with the existing system:
sudo nano /etc/dnsmasq.conf
Copy paste the following configuration:
port=0
dhcp-range=192.168.0.0,proxy
log-dhcp
enable-tftp
tftp-root=/tftpboot
pxe-service=0,"Raspberry Pi Boot"
dhcp-boot=pxelinux.0,pxeserver,192.168.0.15
pxe-service=X86PC, "Boot BIOS PXE", pxelinux.0
Edit the following file:
sudo vim /etc/default/dnsmasq
Enter the following entry in the last line:
DNSMASQ_EXCEPT=lo
Enable and restart the service:
systemctl enable dnsmasq
systemctl restart dnsmasq

Creating boot sequence

Preparing SD card

  • Flash the SD card with the normal Raspbian Lite image from the following URL:
https://downloads.raspberrypi.org/raspbian_lite_latest
  • Once that is done, we need to install a small package on the SD card (the package is available on the Web-00 server in the unstable repo):
sudo apt update && sudo apt upgrade -y
sudo apt install ./eyemagnet-magiccard_0.1_armhf.deb
  • Once complete, reboot the Pi.
  • Re-login and you will be presented with options. Ignore them for now and quit to complete the setup.

Serve kernel file

Take the same SD card that we prepared for network booting and copy its boot directory to the TFTP server.
All the contents of the SD card's boot directory should end up in the /tftpboot/workingbootcode directory on the server.
Once complete, go to the TFTP server root directory and issue the following command:
ln -s workingbootcode/bootcode.bin .
This sets up the TFTP server to start serving the kernel.

Modifying boot file

We need to edit cmdline.txt in the workingbootcode directory on the TFTP server to reflect our NFS root, which will be mounted when the Pi boots up, as follows:
selinux=0 dwc_otg.lpm_enable=0 rootwait rw nfsroot=192.168.0.15:/nfs/image,v3 ip=dhcp root=/dev/nfs console=tty1 elevator=deadline logo.nologo vt.global_cursor_default=0 plymouth.enable=0
  • All these steps can be done from inside the Pi; once complete, shut down this Pi.
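A small hedged helper (the name is made up) for pointing cmdline.txt at a different NFS server IP without editing it by hand:

```shell
# Hypothetical helper: rewrite the nfsroot server IP in a cmdline.txt file.
# Usage: update_nfsroot <path-to-cmdline.txt> <new-server-ip>
update_nfsroot() {
  sed -i "s|nfsroot=[0-9.]*:|nfsroot=$2:|" "$1"
}
```

Example: update_nfsroot /tftpboot/workingbootcode/cmdline.txt 192.168.0.15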

Creating the Main Image

'Clone' the filesystem

  • Needs to be done on a freshly built Pi (a completely separate Pi)
Disable Swap
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove
Once the Pi has rebooted, locally or via SSH, run:
sudo mkdir -p /nfs/image
sudo rsync -xa --progress --exclude /nfs / /nfs/image
Note: you must do this on a running Pi; copying off the SD card on another host did not appear to work.
cd /nfs/image
sudo mount --bind /dev dev
sudo mount --bind /sys sys
sudo mount --bind /proc proc
sudo umount dev
sudo umount sys
sudo umount proc
Remove the duplicated swap file if it exists:
sudo rm /nfs/image/var/swap
Create a tar archive of the nfs folder (the -p flag preserves permissions etc.):
sudo tar -cpf /nfs.tar /nfs
Finally, this archive needs to end up on the server:
rsync -zavP /nfs.tar <Username>@<serverip>:/home/nfs.tar

Setting up NFS

NFS server (needs to be done on the server)

Install the nfs service
  • for Debian based system
sudo apt-get install -y nfs-kernel-server
  • for RHEL based system
sudo yum -y install nfs-utils

Serve Image (needs to be done on the server)

Now go to the home directory and issue the following command:
sudo tar --same-owner -xvf nfs.tar
Remove all other entries from /nfs/image/etc/fstab so that it only contains the proc entry, as follows:
proc            /proc           proc    defaults          0       0
Now move the content:
sudo mv nfs /
Edit the exports file (/etc/exports) and add:
/nfs/image *(rw,sync,no_subtree_check,no_root_squash)
Start service:
  • for Debian based system
sudo systemctl enable rpcbind nfs-kernel-server
sudo systemctl restart rpcbind nfs-kernel-server
  • for RHEL based system
sudo systemctl start rpcbind nfs-server 
sudo systemctl enable rpcbind nfs-server
Issue the following command to make it active:
exportfs -rv

Pi Identity

Final boot

Now boot the Pi with the same Magic Card; you will be greeted as follows:
Welcome to Pi Nework boot creator...
Boot mode fine :) (17:3020000a)
Enter the Remote TFTP server IP...: 192.168.0.15
Please enter your choice:
1) Create Image on Server and Reboot       
2) Just reboot       
3) Shutdown          
4) Remove Image from Server
5) Check firmware        
6) Update Firmware to enable Network Boot
7) Quit
Select Option 1 to create the identity.
Once complete it will make the pi reboot automatically.
Re-login to the Pi and you will be presented with the same display, but this time select Option 3.
Slide the magic card out and boot.
The Pi will boot off the network server.

Thursday 8 November 2018

PXE ready a Pi

#!/bin/bash

function imagemaker {
echo "Your Device Identity is $(tput setaf 1)$(tput bold)$1$(tput sgr0)"
echo "Your Device IP Address after network boot will be: $(tput setaf 1)$(tput bold)$2$(tput sgr0)"
if [ ! -d /tftpboot/$1 ]
  then
mkdir -p /tftpboot/$1
cd /tftpboot/$1
ln -s ../workingbootcode/* .
cp cmdline.txt cmdline.txt.default
rm -f cmdline.txt
sed "s/westpac/$1/g" cmdline.txt.default > cmdline.txt
rm -f cmdline.txt.default
mkdir /nfs/$1
cd /nfs/$1
echo
echo "$(tput setaf 3)$(tput blink)Now Creating Image, Please wait...$(tput sgr0)"
cp -rp /nfs/westpac/* .
echo
  echo "$(tput setaf 4)Image has been built on Server$(tput sgr0)"
  sleep 2s
  echo
  echo "$(tput setaf 5)Image building successful - please shut down the Pi after logging back in to complete network boot setup!$(tput sgr0)"
  echo "/nfs/$1 *(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
  exportfs -r
sleep 2s
else
echo
  echo "$(tput setaf 1)$(tput blink)Image exists...quitting!$(tput sgr0)"
echo
sleep 2s
fi
exit 0
}

function imageremove {
echo "Your Device Identity is $(tput setaf 1)$(tput bold)$1$(tput sgr0)"
if [ -d /tftpboot/$1 ]; then
echo
  echo "$(tput setaf 1)$(tput blink)Image exists...Removing it!$(tput sgr0)"
  exportfs -u *:/nfs/$1
sed -i /$1/d /etc/exports
rm -fr /tftpboot/$1
  rm -fr /nfs/$1
  exportfs -r
echo
echo "$(tput setaf 2)Image has been removed successfully...Bye!$(tput sgr0)"
echo
sleep 2s
  else
echo
  echo "$(tput setaf 1)$(tput blink)Image doesn't exist...Quitting!$(tput sgr0)"
sleep 2s
sed -i /$1/d /etc/exports
exportfs -r
echo
  fi
exit 0
}

sn[0]=$(tail -1 /proc/cpuinfo | grep -o ".\{8\}$")
IP=$(hostname -I)
echo
echo "$(tput setaf 1)$(tput bold)Welcome to Pi Network boot creator...$(tput sgr0)"
echo
CHECK=$(vcgencmd otp_dump | grep 17:)

if [[ "$CHECK" == "17:3020000a" ]]; then
  echo "Boot mode fine :) ($CHECK)"
else
  echo "!!! Boot mode failed, value: $CHECK"
fi
read -e -p "Enter the Remote TFTP server IP...: " serverip
echo
echo 'Please enter your choice: '
echo

options=("Create Image on Server and Reboot" "Just reboot" "Shutdown" "Remove Image from Server" "Check firmware" "Update Firmware to enable Network Boot" "Quit")
select opt in "${options[@]}"
do
    case $opt in
        "Create Image on Server and Reboot")
            echo
    echo "You chose Option 1 ($(tput setaf 5)Please Wait - Loading your details to the server$(tput sgr0))"
    index=$(printf "%s" "$sn")
    sshpass -p 'qwe' ssh -t -t -o StrictHostKeyChecking=no root@$serverip "`declare -f imagemaker`; imagemaker "$index" "$IP""
    sudo reboot
    break;;
        "Just reboot")
    echo
            echo "You chose Option 2"
            sudo reboot
            break;;
        "Shutdown")
    echo
            echo "You chose $REPLY which is $opt"
            sudo shutdown now
    break;;
        "Remove Image from Server")
    echo
    index1=$(printf "%s" "$sn")
    sshpass -p 'qwe' ssh -t -t -o StrictHostKeyChecking=no root@$serverip "`declare -f imageremove`; imageremove "$index1""
    ;;
        "Check firmware")
    echo
    CHECK=$(vcgencmd otp_dump | grep 17:)
    if [[ "$CHECK" == "17:3020000a" ]]; then
      echo "Boot mode fine :) ($CHECK)"
    else
      echo "!!! Boot mode failed, value: $CHECK"
    fi
    ;;
        "Update Firmware to enable Network Boot")
    echo
            echo "You chose $REPLY which is $opt"
    sudo apt-get install rpi-update && sudo rpi-update
    ;;
        "Quit")
            break
            ;;
        *) echo "invalid option $REPLY";;
    esac
done
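The sn line at the top of the script takes the device identity from the last eight characters of the CPU serial line in /proc/cpuinfo. The extraction in isolation (the serial value here is made up):

```shell
# A fabricated /proc/cpuinfo "Serial" line; only the trailing 8 chars matter.
line="Serial          : 100000002fa3b8c1"
echo "$line" | grep -o ".\{8\}$"    # → 2fa3b8c1
```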

Wednesday 17 October 2018

Ansible Playbooks (automating certain tasks)

tasks/main.yml

---
 - name: copying file with content matching
   copy:
    src: '{{ item.name }}'
    dest: '{{ item.destination }}'
   with_items: '{{ fileover }}'
 
 - name: insert string
   lineinfile:
    path: /etc/openvpn/{{ item }}
    regexp: 'remote vpn.eyemagnet.net 1194'
    line: 'remote 202.160.117.202 1194'
    backrefs: yes
   with_items: '{{ vpnpaths }}'

 - name: replace string
   replace:
    path: /etc/openvpn/{{ item }}
    regexp: '1197'
    replace: "1194"
   with_items: '{{ vpnpaths }}'

 - name: Installing packages
   apt:
    name: eyemagnet-rpi-splashscreen
    update_cache: yes
    allow_unauthenticated: yes
    force: yes

vars/main.yml

---
fileover:
  - name: desktop-items-0.conf
    destination: /home/pi/.config/pcmanfm/LXDE-pi
  - name: splash.jpg
    destination: /etc
  - name: config.txt
    destination: /boot
  - name: default.mpegts
    destination: /home/pi
  - name: select-committee
    destination: /var/www
  - name: cmdline.txt
    destination: /boot
  - name: 93eyemagnet_media
    destination: /etc/cron.d
  - name: fstab
    destination: /etc
  - name: rc.local
    destination: /etc

vpnpaths:
  - client-nz.conf
  - client-nz.conf.aws
  - client-nz.conf.nz.vpn
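Assuming the two files above live in a role (say roles/vpn-fixup/tasks/main.yml and roles/vpn-fixup/vars/main.yml — the role name and inventory group are hypothetical), a minimal playbook to apply them might be:

```yaml
---
- hosts: pi_clients        # hypothetical inventory group
  become: yes
  roles:
    - vpn-fixup
```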

OpenVPN key generation

#!/bin/bash

function buildkey {
cd /etc/openvpn/easy-rsa
./eyemagnet-build-key $file
}

function ipselect {
read -e -p "Enter IP series - the prefix (10.161) is already there - only enter the 3rd octet: " ip
ip2="10.161.$ip"
grep -rnw '/etc/openvpn/ccd/' -e $ip2 | awk '{print $(NF-1)}' | sort -V
}

function setip {
read -e -p "Enter Ip value (Last octet) - New selection is done automatically: " ipslt
ipslteven=$(($ipslt + 2))
ipsltodd=$(($ipslt + 3))
ipslt2="$ip2.$ipslteven"
ipslt3="$ip2.$ipsltodd"
cd /etc/openvpn/ccd
cat <<- EOF > /etc/openvpn/ccd/$file
ifconfig-push $ipslt2 $ipslt3
EOF
cat /etc/openvpn/ccd/$file
}

cd /etc/openvpn/easy-rsa/keys
echo ""
read -e -p "Enter proposed Client FQDN: " file
echo ""

file2="$file.crt"
path=/etc/openvpn/keys/

if [ ! -f "$file2" ]
then
echo "$file file not found"
buildkey
ipselect
setip
/usr/local/sbin/add_client_to_domain $file $ipslt2
cd /etc/openvpn/easy-rsa/keys
sshpass -p 'mypassword' rsync -zavP $file.* root@lite-builder.office.wellington.nz.vpn:/home/lite_builder/svn/raspberry_pi/sd_installer/vpnkeys/
read -e -p "Enter the Host FQDN or IP Address: " hostaddr
echo ""
sshpass -p 'mypassword' rsync -e "ssh -o StrictHostKeyChecking=no" -zavP $file.* pi@$hostaddr:/home/pi
sshpass -p 'mypassword' ssh -t -t -o StrictHostKeyChecking=no pi@$hostaddr sudo -i "bash -s" -- < /home/em_naveed/domainer.sh "$file" "$path"
sshpass -p 'mypassword' ssh -t -t -o StrictHostKeyChecking=no pi@$hostaddr 'sudo reboot'
echo ""
echo "Job Complete"
exit
else
echo "$file2 already exists."
        echo ""
fi
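The setip function hands each client a paired address: the chosen base octet plus 2 for the client's tunnel address and plus 3 for its peer endpoint (net30-style pairing). The arithmetic in isolation, with sample values rather than real assignments:

```shell
ip2="10.161.5"    # example series
ipslt=40          # example base octet
ipslteven=$((ipslt + 2))
ipsltodd=$((ipslt + 3))
echo "ifconfig-push $ip2.$ipslteven $ip2.$ipsltodd"   # → ifconfig-push 10.161.5.42 10.161.5.43
```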

--------------------------------------------------------------------------------------------------------------------------

./domainer.sh:

cd /home/pi
yes | mv $1.* $2
rm -fr $1.*
cd $2
rm -fr default.*
cd /etc/openvpn/
sed -i 99s/.*/"cert    \/etc\/openvpn\/keys\/"$1".crt"/ client-*.conf
sed -i 100s/.*/"key     \/etc\/openvpn\/keys\/"$1".key"/ client-*.conf
sed -i "1s/.*/$1/" /etc/hostname
grep -q '127.0.1.2.*' /etc/hosts && sed -i "s/127.0.1.2.*/127.0.1.2       $1/" /etc/hosts || echo "127.0.1.2       $1" >> /etc/hosts
exit 0

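The last grep && sed || echo line in domainer.sh is an update-or-append idiom: rewrite the 127.0.1.2 entry if it exists, otherwise add it, so repeated runs never duplicate the line. A self-contained sketch against a throwaway hosts file (the hostname is made up):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"
name="pi-client-01"    # hypothetical hostname
# Run the idiom twice; the entry is appended once, then updated in place.
grep -q '127.0.1.2.*' "$hosts" && sed -i "s/127.0.1.2.*/127.0.1.2       $name/" "$hosts" || echo "127.0.1.2       $name" >> "$hosts"
grep -q '127.0.1.2.*' "$hosts" && sed -i "s/127.0.1.2.*/127.0.1.2       $name/" "$hosts" || echo "127.0.1.2       $name" >> "$hosts"
grep -c '127.0.1.2' "$hosts"    # → 1
```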

Smaller source code packaging

#!/bin/bash
cd raspbian-stretch/eyemagnet-monitoring-nagios-naveed
var1=$(awk 'NR==1{print $2}' debian/changelog | head -c 4 | tail -c 1)
var2=$(awk 'NR==1{print $2}' debian/changelog | head -c 2 | tail -c 1)
var3=$(($var1 + 1))
if [ $var3 -gt 9 ]
then
  var2=$(($var2 + 1))
  var3=0
fi
version="$var2.$var3"
echo "New version is $version"
echo
echo "Preparing package release eyemagnet-monitoring-nagios-$version"
echo
> debian/changelog

cat>>debian/changelog <<EOF
eyemagnet-monitoring-nagios ($version) unstable; urgency=medium

  * Initial Release.

 -- Naveed Sheikh <naveed.sheikh@eyemagnet.com>  $(date -R)
EOF

dpkg-buildpackage -uc -us
rm -fr debian/eyemagnet-monitoring-nagios
exit 0
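The awk/head/tail slicing above implements a simple X.Y version bump with carry: the minor digit increments, and rolling past 9 bumps the major. The same logic written as a function for clarity (a sketch, not part of the build script):

```shell
bump() {
  major=${1%%.*}
  minor=${1##*.}
  minor=$((minor + 1))
  if [ "$minor" -gt 9 ]; then
    major=$((major + 1))
    minor=0
  fi
  echo "$major.$minor"
}
bump 1.3    # → 1.4
bump 1.9    # → 2.0
```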

Tuesday 9 October 2018

Apt Indexer

#!/bin/bash

dpkg-scanpackages -m pool >  dists/trusty/main/binary-armhf/Packages
cat dists/trusty/main/binary-armhf/Packages | gzip -9c > dists/trusty/main/binary-armhf/Packages.gz

PKGS=$(wc -c dists/trusty/main/binary-armhf/Packages)
PKGS_GZ=$(wc -c  dists/trusty/main/binary-armhf/Packages.gz)
cat > dists/trusty/Release << EOF
Suite: trusty
Architectures: all
Date: $(date -Ru)
MD5Sum:
 $(md5sum dists/trusty/main/binary-armhf/Packages  | cut -d" " -f1) $PKGS
 $(md5sum dists/trusty/main/binary-armhf/Packages.gz  | cut -d" " -f1) $PKGS_GZ
SHA256:
 $(sha256sum dists/trusty/main/binary-armhf/Packages | cut -d" " -f1) $PKGS
 $(sha256sum dists/trusty/main/binary-armhf/Packages.gz | cut -d" " -f1) $PKGS_GZ
EOF

sleep 3s

cd dists/trusty

gpg --yes -abs -o Release.gpg Release

# Sign!
gpg --yes --batch --passphrase mypassword --digest-algo SHA256 --armor --output Release.gpg --detach-sign Release
gpg --yes --batch --passphrase mypassword --digest-algo SHA256 --clearsign --output InRelease Release

cd -
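The MD5Sum/SHA256 entries in a Release file are "<hash> <size> <path>" triplets; the script gets this almost for free because wc -c file prints "<size> <path>". Demonstrated on a throwaway file:

```shell
f=$(mktemp)
printf 'demo\n' > "$f"    # 5 bytes
# Prefixing the checksum to wc's output yields the full triplet.
entry=" $(md5sum "$f" | cut -d" " -f1) $(wc -c "$f")"
echo "$entry"             # hash, size, path
```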

Automating the repo signing:

#!/bin/sh

dir1="/var/www/raspbian-stretch/staging/"
dir2="/var/www/raspbian-stretch/unstable/"
dir3="/var/www/raspbian-stretch/legacy/"
dir4="/var/www/raspbian-stretch/stable/"

monitor() {
while inotifywait -qqe modify,move,create,delete,delete_self "$1"; do
    cd "$1"
    ./indexer.sh >/dev/null 2>&1
    cd -
done >/dev/null 2>&1
}

monitor "$dir1" &
monitor "$dir2" &
monitor "$dir3" &
monitor "$dir4" &
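To keep these watchers alive across reboots, one option is a systemd unit (a sketch; the installed script path is an assumption, and a final `wait` would need to be appended to the script so it stays in the foreground for systemd):

```ini
[Unit]
Description=Re-index apt repositories on change
After=network.target

[Service]
ExecStart=/usr/local/bin/repo-monitor.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```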

Saturday 6 October 2018

Apt repo on Centos 7

rsync -zavP raspbian-stretch nash@naveed2.user.nz.vpn:/home/nash/webrepos/

yum install httpd -y

chmod -R 755 /var/www

gpg --gen-key

gpg -k

gpg --edit-key CE123456

showpref

setpref AES256 AES192 AES CAST5 3DES IDEA SHA256 SHA384 SHA512 SHA224 ZLIB BZIP2 ZIP Uncompressed

gpg --export -a CE123456 > /home/repo.key

gpg --no-default-keyring --keyring /var/www/apt/myrepo.gpg --import /home/repo.key

cp /home/repo.key /var/www/html/

mkdir -p /var/www/html/apt-repo/

touch /var/www/html/apt-repo/indexer.sh

cat > /var/www/html/apt-repo/indexer.sh << 'EOFSH'

#!/bin/bash

dpkg-scanpackages -m . > Packages
cat Packages | gzip -9c > Packages.gz

PKGS=$(wc -c Packages)
PKGS_GZ=$(wc -c Packages.gz)
cat > Release << EOF
Architectures: all
Date: $(date -Ru)
MD5Sum:
$(md5sum Packages  | cut -d" " -f1) $PKGS
$(md5sum Packages.gz  | cut -d" " -f1) $PKGS_GZ
SHA256:
$(sha256sum Packages | cut -d" " -f1) $PKGS
$(sha256sum Packages.gz | cut -d" " -f1) $PKGS_GZ
EOF

sleep 3

gpg --yes --digest-algo SHA256 --armor --output Release.gpg --detach-sign Release
gpg --yes --digest-algo SHA256 --clearsign --output InRelease Release
EOFSH

chmod 755 /var/www/html/apt-repo/indexer.sh
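One gotcha with the cat > … << EOFSH block above: the heredoc delimiter must be quoted ('EOFSH'), otherwise the shell expands every $(…) while writing the file and mangles the embedded script. The difference in miniature:

```shell
out1=$(mktemp); out2=$(mktemp)
# Unquoted delimiter: the command substitution runs now
cat > "$out1" << EOF
stamp=$(echo NOW)
EOF
# Quoted delimiter: the text is written verbatim
cat > "$out2" << 'EOF'
stamp=$(echo NOW)
EOF
cat "$out1"    # → stamp=NOW
cat "$out2"    # → stamp=$(echo NOW)
```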

cp /tmp/deb/stable/*.deb /var/www/html/apt-repo/

cd /var/www/html/apt-repo/ && ./indexer.sh

wget -qO - http://192.168.201.121/repo.key | sudo apt-key add -

Or:

wget -qO - http://192.168.201.121/myrepo.gpg | sudo apt-key add -

Or

cd /etc/apt/trusted.gpg.d/

wget http://192.168.201.121/myrepo.gpg

apt install software-properties-common

add-apt-repository "deb http://192.168.201.121/raspbian stretch-stable main"

Or:

echo "deb http://192.168.201.121/apt-repo/ / " > /etc/apt/sources.list.d/new-repo.list

apt-get update

Tuesday 2 October 2018

Shorter Method of re-packaging debian build

#!/bin/bash

var1=$(awk 'NR==2{print $2}' raspbian-stretch/eyemagnet-monitoring-nagios-naveed2/DEBIAN/control | head -c 3 | tail -c 1)
var2=$(awk 'NR==2{print $2}' raspbian-stretch/eyemagnet-monitoring-nagios-naveed2/DEBIAN/control | head -c 1)
var3=$(($var1 + 1))
if [ $var3 -gt 9 ]
then
  var2=$(($var2 + 1))
  var3=0
fi
version="$var2.$var3"
echo
echo "New version is $version"
sed -i "2s/.*/Version: $version/" raspbian-stretch/eyemagnet-monitoring-nagios-naveed2/DEBIAN/control
echo
dpkg-deb -Z xz -b raspbian-stretch/eyemagnet-monitoring-nagios-naveed2/ .
echo
exit 0
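The sed "2s/.*/Version: $version/" call rewrites line 2 of the control file wholesale, so it relies on Version: always being the second line. On a sample control file (made-up package name):

```shell
ctl=$(mktemp)
printf 'Package: demo-pkg\nVersion: 1.3\nArchitecture: all\n' > "$ctl"
# Replace the whole second line, exactly as the build script does.
sed -i "2s/.*/Version: 1.4/" "$ctl"
cat "$ctl"
```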

Sunday 30 September 2018

Customize Debian Package

#!/bin/bash

mkdir -p buildirectory/sourcecode
cd buildirectory/sourcecode
dh_make -i -n -p rash_$1 -y
rm -f debian/*.EX debian/*.ex debian/README*
mkdir files/
touch files/nrpe.cfg.default
touch debian/install debian/postinst debian/postrm
printf "files/* tmp\n" > debian/install
printf "files/etc/* etc\n" >> debian/install
 > debian/postrm

cat >>debian/postrm <<EOF
#!/bin/bash
rm -fr /etc/nagios/nrpe.cfg.installed
EOF

 > debian/changelog

cat>>debian/changelog <<EOF
rash ($1) stable; urgency=medium

  * Initial build

 -- Naveed Sheikh <naveed@nash.com>  $(date -R)
EOF

 > debian/control

cat >>debian/control <<EOF
Source: rash
Section: installation
Priority: optional
Maintainer: Naveed Sheikh <naveed@nash.com>
Build-Depends: debhelper (>= 10)
Standards-Version: 4.1.2
Homepage: www.home.com

Package: rash
Architecture: all
Pre-Depends: nagios-nrpe-server
Depends: \${misc:Depends}
Description: Installing nrpe config file
EOF

 > debian/postinst

cat >>debian/postinst<<EOF
#!/bin/sh
after_upgrade() {
    :
#!/bin/bash
if [ -e /etc/nagios/nrpe.cfg.installed ]
then
    rm -fr /tmp/nrpe.cfg.default
    exit 0
else
    cd /etc/nagios/
    cat /tmp/nrpe.cfg.default > nrpe.cfg
    mv /tmp/nrpe.cfg.default nrpe.cfg.installed
    rm -fr /tmp/nrpe.cfg.default
fi
}

after_install() {
    :
#!/bin/bash
    cd /etc/nagios/
    mv nrpe.cfg nrpe.original.backup
    cat /tmp/nrpe.cfg.default > nrpe.cfg
    mv /tmp/nrpe.cfg.default nrpe.cfg.installed
    rm -fr nrpe.original.backup /tmp/nrpe.cfg.default
}

if [ "${1}" = "configure" -a -z "${2}" ] || \
   [ "${1}" = "abort-remove" ]
then
    # "after install" here
    # "abort-remove" happens when the pre-removal script failed.
    #   In that case, this script, which should be idempotent, is run
    #   to ensure a clean roll-back of the removal.
    after_install
elif [ "${1}" = "configure" -a -n "${2}" ]
then
    upgradeFromVersion="${2}"
    # "after upgrade" here
    # NOTE: This slot is also used when deb packages are removed,
    # but their config files aren't, but a newer version of the
    # package is installed later, called "Config-Files" state.
    # basically, that still looks a _lot_ like an upgrade to me.
    after_upgrade "${2}"
elif echo "${1}" | grep -E -q "(abort|fail)"
then
    echo "Failed to install before the post-installation script was run." >&2
    exit 1
fi
EOF

chmod 755 debian/post*
dpkg-buildpackage -uc -us
exit 0
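The postinst dispatch at the bottom keys off dpkg's arguments: "configure" with an empty second argument (or "abort-remove") means a fresh install, "configure" with a version means an upgrade. The branching in isolation, with echo stubs standing in for the real hooks:

```shell
dispatch() {
  # $1 = dpkg action, $2 = previously configured version (empty on first install)
  if { [ "$1" = "configure" ] && [ -z "$2" ]; } || [ "$1" = "abort-remove" ]; then
    echo "after_install"
  elif [ "$1" = "configure" ] && [ -n "$2" ]; then
    echo "after_upgrade from $2"
  else
    echo "unhandled: $1"
  fi
}
dispatch configure ""       # → after_install
dispatch configure 1.0      # → after_upgrade from 1.0
dispatch abort-remove ""    # → after_install
```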

Friday 28 September 2018

Building Nagios packages for pi

sudo su

nano /etc/apt/sources.list.d/debian-stretch.list

# Debian Stretch - New Zealand
deb http://ftp.nz.debian.org/debian/ stretch main contrib non-free
deb-src http://ftp.nz.debian.org/debian/ stretch main contrib non-free

nano /etc/apt/sources.list.d/eyemagnet-raspbian-stretch.list

# Eyemagnet [Stable] for Raspberry Pi
#deb https://eyemagnet.com/repo/raspbian-stretch stable/

nano /etc/apt/sources.list.d.off/raspi.list

deb http://archive.raspberrypi.org/debian/ stretch main ui
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
deb-src http://archive.raspberrypi.org/debian/ stretch main ui

nano /etc/apt/sources.list.d.off/sources.list

deb http://raspbian.raspberrypi.org/raspbian/ stretch main contrib non-free rpi
# Uncomment line below then 'apt-get update' to enable 'apt-get source'
deb-src http://raspbian.raspberrypi.org/raspbian/ stretch main contrib non-free rpi

nano /home/gpgkey.sh

#!/bin/bash

apt-get update 2> /tmp/keymissing; for key in $(grep "NO_PUBKEY" /tmp/keymissing |sed "s/.*NO_PUBKEY //"); do echo -e "\nProcessing key: $key"; gpg --keyserver pgpkeys.mit.edu --recv $key && gpg --export --armor $key | apt-key add -; done
apt-get update 2> /tmp/keymissing; for key in $(grep "NO_PUBKEY" /tmp/keymissing |sed "s/.*NO_PUBKEY //"); do echo -e "\nProcessing key: $key"; gpg --keyserver subkeys.pgp.net --recv $key && gpg --export --armor $key | apt-key add -; done

chmod 700 /home/gpgkey.sh

/home/gpgkey.sh

apt update && apt upgrade -y

apt install dh-systemd libssl1.0-dev libwrap0-dev

apt-get install build-essential fakeroot dpkg-dev devscripts dh-make git

mkdir sourcebuild

cd sourcebuild

apt search nrpe

apt-get source nagios-nrpe

cd nagios-nrpe-3.0.1

su pi

dpkg-buildpackage -uc -us -rfakeroot

cd ..

ls -l *.deb

-rw-r--r-- 1 root root  28804 Sep 28 10:51 nagios-nrpe-plugin_3.0.1-3+deb9u1_armhf.deb
-rw-r--r-- 1 root root  52262 Sep 28 10:51 nagios-nrpe-plugin-dbgsym_3.0.1-3+deb9u1_armhf.deb
-rw-r--r-- 1 root root 345196 Sep 28 10:51 nagios-nrpe-server_3.0.1-3+deb9u1_armhf.deb
-rw-r--r-- 1 root root  71484 Sep 28 10:51 nagios-nrpe-server-dbgsym_3.0.1-3+deb9u1_armhf.deb

rm -fr nagios-nrpe-plugin-dbgsym_3.0.1-3+deb9u1_armhf.deb nagios-nrpe-server-dbgsym_3.0.1-3+deb9u1_armhf.deb

mkdir -p servernewpack serveroldpack/DEBIAN

dpkg-deb -x nagios-nrpe-server_3.0.1-3+deb9u1_armhf.deb serveroldpack/

dpkg-deb -e nagios-nrpe-server_3.0.1-3+deb9u1_armhf.deb serveroldpack/

nano serveroldpack/DEBIAN/control

Package: eyemagnet-monitoring-nagios
Source: nagios-nrpe
Version: 1.0

nano serveroldpack/etc/nagios/nrpe.cfg

(As per your liking)

dpkg-deb -Z xz -b serveroldpack/ servernewpack/

ls -l servernewpack/

-rw-r--r-- 1 root root 346420 Sep 27 14:42 eyemagnet-monitoring-nagios_1.0_armhf.deb

mkdir -p pluginnewpack pluginoldpack/DEBIAN

dpkg-deb -x nagios-nrpe-plugin_3.0.1-3+deb9u1_armhf.deb pluginoldpack/

dpkg-deb -e nagios-nrpe-plugin_3.0.1-3+deb9u1_armhf.deb pluginoldpack/

nano pluginoldpack/DEBIAN/control

Package: eyemagnet-monitoring-nagios-plugin
Source: nagios-nrpe
Version: 1.0

nano pluginoldpack/etc/nagios/nrpe.cfg

(As per your liking)

dpkg-deb -Z xz -b pluginoldpack/ pluginnewpack/

ls -l pluginnewpack/

-rw-r--r-- 1 root root  28804 Sep 27 14:42 eyemagnet-monitoring-nagios-plugin_1.0_armhf.deb

git clone ssh://username@urlofyourwebsite/git/repo.git

Move serveroldpack and pluginoldpack contents to the appropriate directories in the local repo clone.

Move the newly built package files to the appropriate location in the local git repo.

git add .

git commit -m "my push"

git push origin master

Job Complete!
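The local add/commit side of the sequence can be rehearsed in a throwaway repository (the push itself is omitted here since it needs the real remote; the file name is a stand-in):

```shell
repo=$(mktemp -d)
git -C "$repo" init -q
echo demo > "$repo/package-placeholder.txt"   # stand-in for the .deb files
git -C "$repo" add .
git -C "$repo" -c user.name=demo -c user.email=demo@example.com commit -qm "my push"
git -C "$repo" log --oneline
```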

Monday 24 September 2018

Ansible user with pre-existing keys

This is the main task file:

---
- name: Create a login user with group
  user:
   name: '{{item.name}}'
   groups: '{{item.group}}'
   append: yes
   state: present
  when: item.group is defined
  with_items: '{{sshusers}}'

- name: Create a login user w/o group
  user:
   name: '{{item.name}}'
   state: present
  when: item.group is not defined
  with_items: '{{sshusers}}'

- name: Setting sudo permissions
  lineinfile:
   path: /etc/sudoers
   state: present
   regexp: '^%sudo'
   line: '%sudo ALL=(ALL) NOPASSWD: ALL'
   backrefs: yes

- name: Blocking root password access
  lineinfile:
   path: /etc/ssh/sshd_config
   state: present
   regexp: '^PermitRootLogin'
   line: 'PermitRootLogin without-password'
   backrefs: yes
  notify: reload ssh

- name: Creates directory
  file:
   path: /home/{{item.name}}/.ssh
   state: directory
   owner: '{{item.name}}'
   group: '{{item.name}}'
   mode: 0700
   recurse: yes
  with_items: '{{sshusers}}'

- name: ensure file exists
  copy:
   content: ""
   dest: /home/{{item.name}}/.ssh/authorized_keys
   force: no
   group: '{{item.name}}'
   owner: '{{item.name}}'
   mode: 0600
  with_items: '{{sshusers}}'

- name: copy SSH keys
  authorized_key:
   user: '{{item.name}}'
   key: "{{item.key}}"
   state: present
   exclusive: yes
  when: item.key is defined
  with_items: '{{sshusers}}'


The var file will look like this:

sshusers:
  - name: em_naveed
    group: sudo
    key: ssh-rsa AAAABxxxxxxxxxxxxxxxxx in clear text
  - name: em_hugo
    key: ssh-rsa AAAABxxxxxxxxxxxxxxxx in clear text


The handler file will look like this:

---
- name: reload ssh
  service:
   name: ssh
   state: reloaded
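Assuming the task, var, and handler files above are laid out as a role (e.g. roles/ssh-users/{tasks,vars,handlers}/main.yml — the role name is an assumption), applying it is then just:

```yaml
---
- hosts: all
  become: yes
  roles:
    - ssh-users
```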