Oct 7, 2018

Update for Xen 7.1+ - STOP: 0x0000007B BSOD After Restoring UrBackup Image to XenServer VM

A few months ago I posted about getting a STOP: 0x0000007B blue screen of death on one of my VMs after restoring an image backup from UrBackup in Xen 6.5. My solution then was to create the blank VM that we were restoring to using a Windows XP template.

Well, the other night I was migrating all of my old Xen 6.5 VMs to a new Xen 7.1 cluster, and that troublesome VM popped up again! I got another BSOD when I powered it up in the new cluster!



The trouble this time is that Xen 7.1 doesn't have a Windows XP template! Damn it!

No problem, I did find a solution. If you are getting this error for one of your VMs after moving, upgrading or restoring to Xen 7.1 or newer, just use the "Other install media" template located at the bottom of the templates list.


After using that template and attaching the original disk, it booted up just fine!
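If you prefer the command line, the same thing should be doable with the xe CLI. Here's a rough sketch (the template name-label and VM name are examples; double check the exact label on your version first):

xe template-list name-label="Other install media" --minimal
xe vm-install template="Other install media" new-name-label="restored-vm"

After that, attach the restored disk to the new VM and boot it up.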

Sep 26, 2018

SQL Query to see how long DBCC CHECKDB will take

Last night while converting a VMWare VM to a XenServer VM I had a little bit of an issue with one of the database VMs, and several of the databases came up as "Suspect."

We decided to follow this procedure here (How to fix a Suspect Database), and it went fairly quickly, except on the biggest database, which was almost 100GB in size.

Well, we wanted to know how long it would take for DBCC CHECKDB to finish! I'm sure you are here because you are in the same position. Here is a query that will give you an estimated completion time, so you have a rough idea of how long it will take:

 SELECT session_id,
     request_id,
     percent_complete,
     estimated_completion_time,
     DATEADD(ms, estimated_completion_time, GETDATE()) AS EstimatedEndTime,
     start_time,
     status,
     command
 FROM sys.dm_exec_requests
 WHERE database_id = <YOUR DATABASE ID NUMBER>


Fairly simple right? Your output will look like this:


If you are wondering how to find your database_id you can find it by running this query:

 USE <DATABASE NAME>
 SELECT DB_ID() AS [Database ID]
 GO
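If you'd rather skip the separate lookup, you should also be able to pass the database name straight to DB_ID() in the first query. A sketch (the LIKE filter is an assumption; the command column typically starts with "DBCC" while CHECKDB is running):

 SELECT session_id,
     percent_complete,
     DATEADD(ms, estimated_completion_time, GETDATE()) AS EstimatedEndTime
 FROM sys.dm_exec_requests
 WHERE database_id = DB_ID('<DATABASE NAME>')
     AND command LIKE 'DBCC%'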

Again, fairly simple right? I hope this helped!

Sep 17, 2018

Getting Fog PXE boot working on a Thinkpad T460P, T470P and a T480P

I've been using Fog Project for years. It's my favorite open source operating system imaging tool for large networks. We were using it at my company up until a few years ago, when we started buying Thinkpad T460P laptops and my desktop technician at the time couldn't get these laptops to PXE boot. Instead of doing some actual Googling, he and my Systems Administrator at the time wanted to use WDS instead.

Well, both of those guys have since moved on to other places, and I decided that we were going to save a Windows server license and go back to Fog!

The first thing I had to do was figure out how to get the T460P's, T470P's and now T480P's to boot up to the Fog boot menu. When I first tried booting my T460P, this is the message I received:


Long story short, it got stuck saying No configuration methods succeeded.... Boo!

Well, the fix was actually pretty easy. Instead of using the undionly.kpxe tftp file like the documentation says, we used intel.kpxe, and it worked like a charm! Now we get the Fog boot menu on all models of our Lenovo laptops!
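For reference, here's roughly what that change looks like if your DHCP server is ISC dhcpd (a sketch; the next-server IP is just an example, and on a Windows DHCP server you would set option 067 to intel.kpxe instead):

# /etc/dhcp/dhcpd.conf
next-server 192.168.1.10;   # your Fog/TFTP server's IP (example)
filename "intel.kpxe";      # was undionly.kpxe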

Have you had problems with Lenovo and Fog? What did you have to do to get it to work? Let us know in the comments!

Sep 10, 2018

Active Directory Users and Computers Will Not Open After Azure Site Recovery Test Failover

The other day we wanted to test some database stuff in our Production Azure environment. Obviously, we didn't want to mess with actual Production data, so since we're using Azure Site Recovery for our disaster recovery plan, we decided to initiate a test failover of the impacted systems in an isolated network.

Also, since we're using our own domain controller VMs, we had to fail those over for authentication. This is where I ran into problems. After initiating the test failover of my domain controllers I couldn't open Active Directory Users and Computers. When I tried, I got this message:
Naming information cannot be located because: The specified domain either does not exist or could not be contacted. Contact your system administrator to verify that your domain is properly configured and is currently online.


Well, after banging my head on the wall for a few hours, I finally found a solution. Open a registry editor and browse to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters

Open the SysvolReady value. If it is set to 0, change it to 1. If it is already 1, change it to 0 and click OK, then change it back to 1 and click OK again. Exit the registry editor.
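If you'd rather do that from an elevated command prompt, this one-liner makes the same change (it sets SysvolReady to 1 directly):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters /v SysvolReady /t REG_DWORD /d 1 /f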

Boom! After that I could open Active Directory Users and Computers again without a reboot!

One thing that still didn't work though was Netlogon and Group Policy. To fix that on my two domain controllers in the test environment I had to copy all contents from C:\Windows\SYSVOL\domain\NtFrs_PreExisting___See_EventLog on both domain controllers to C:\Windows\SYSVOL\domain\. When that was done I ran the following on both test domain controllers:

  • net stop netlogon
  • net start netlogon
After that, Netlogon and Group Policy were working again. I also took the extra steps of seizing the FSMO roles and deleting the other domain controllers from Active Directory Users and Computers, as well as from Active Directory Sites and Services along with their sites. That way I wouldn't have to deal with replication issues in the isolated test environment.
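For reference, seizing the roles on a surviving test DC goes roughly like this with ntdsutil (a sketch; the server name is an example):

ntdsutil
roles
connections
connect to server TEST-DC1
quit
seize schema master
seize naming master
seize infrastructure master
seize pdc
seize rid master
quit
quit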

Have you ever run into something like this? Did you fix it differently? Let us know in the comments!

Aug 31, 2018

Alternative Download For HP Proliant SPP

Is it just me or should hardware manufacturers make their drivers easy to download regardless of support contracts? I've been a loyal HP server user for years, but just recently something really chapped my ass! I went to download the latest Service Pack for Proliant (SPP) so I could install drivers on an older Proliant system and couldn't! Why? Because I didn't have a current support contract with HP!

I've also been a loyal Lenovo user for years. Guess what? I can download their System Update tool fine! No need to have some bullshit login for it! In the past I could always download HP's SmartStart CD's without a login. Why now all of a sudden is there a change?

Now you're probably just saying, why not buy a support contract? Well, I already have full hardware support from our aftermarket re-seller Curvature at a fraction of the cost of HP's support. I don't feel the need to pay extra for roughly the same level of support. The only drawback, at least with HP, is that I can't get tools like the SPP!

Well, I found a good Samaritan that is making the downloads for the SPP available for free!  At the time of this writing the March and June 2018 versions of SPP are available here.

Hurry up and grab them before they are gone!

Aug 28, 2018

End of an Era: Coleman University is Out of Business



This is a real shame. I myself am a Coleman Alumnus. I just heard the news while interviewing someone for my company's open Systems Administrator position in San Diego.

Via Fox 5:
Coleman University -- a private college that's operated in San Diego since 1963 -- is closing at the end of the current term, school leadership announced Thursday. 
"To all our very fine students, staff, and faculty, I am personally sorry that we have to close Coleman University," President & CEO Norbert J. Kubilus said. 
In a letter to students, faculty and staff obtained by FOX 5, Kubilus said that Coleman learned in late June that they had lost a bid for accreditation from the Western Association of Colleges and Universities Senior College and University Commission, putting the school in a financial bind.
Continue Reading


Aug 27, 2018

Shadow Admins: What Are They and How Can You Defeat Them?

Managing something you don’t even know exists in your network is always a challenge. This is why the problem of stealthy or shadow admins needs to be acknowledged by security officers. After all, it only takes compromising a single account with elevated privileges to put the security of an entire company in jeopardy.

So, who are these shadow admins and what strategies may help you combat the threats they pose? Keep on reading to find answers to these questions.

Shadow admins: what are they?

When talking about shadow or stealthy admins, we are referring to accounts that have been delegated admin-level privileges in Active Directory, usually through a direct permission assignment. This is why shadow admins are also called delegated admins.

In general, there are four main groups of privileged accounts:

  • Domain admins
  • Local admins
  • Application/services admins
  • Business privileged accounts

Any of these categories may have both legitimate and shadow administrative accounts. However, while legitimate privileged accounts are easy to identify, stealthy admins are not members of any of the default administrative groups in Active Directory and, therefore, can’t be found that easily. As a result, many organizations simply don’t take delegated admins into account when looking for privileged users in Active Directory.

Ignoring delegated admins is not an option, though. These accounts can have unrestricted control over legitimate Active Directory admins and may be able to:

  • change passwords for privileged accounts
  • change permissions on the existing admin groups or accounts
  • add new accounts to the existing administrative groups
  • create new admin groups in Active Directory, and so on.

Therefore, a successful attack on just one delegated admin account can have consequences just as devastating as the compromise of a legitimate privileged account.

Let’s take a closer look at the main risks posed by shadow admins.

Top risks posed by unmanaged admin accounts


The presence of stealthy administrators in your network creates a variety of problems, including:

  • Cybersecurity risks
  • Financial risks

Unmanaged privileged accounts are like a Christmas gift for attackers. Since they are often not covered by an organization’s cybersecurity policy, they can be easier to compromise while still providing attackers with unrestricted access to your company’s critical data.

With the increased risks of data leakage, the presence of shadow admins in the network creates additional financial risks for the company. Not to mention that the news about the loss of valuable, sensitive data can cause severe damage to the company’s reputation.

In April 2017, for instance, Oracle’s Solaris operating platform was targeted by hackers using shadow admins to get into the system. In particular, there were two malicious programs discovered (EXTREMEPARR and EBBISLAND) that were able to elevate the rights of existing users to the administrative level. Thus, they turned regular users into shadow admins with remote root access to platform networks.

The only way to mitigate the risks posed by such accounts is to identify all shadow admins within your network and manage them effectively. In the next section, we talk about ways to find and manage all administrative accounts in your company’s network.

Best practices for detecting and managing shadow admins

As of today, there are two ways you can detect delegated admins in your network and mitigate the risks they pose:

  • By analyzing Access Control Lists (ACLs) on Active Directory
  • By building an effective privileged access management strategy

ACL analysis. When trying to identify all of the privileged accounts present in your company’s network, look for tools that scan ACLs and analyze effective permissions, rather than just an account’s presence in a particular Active Directory group. That way, you’ll be able to find even the accounts that were delegated additional privileges without being added to any of the admin groups in Active Directory.

Once identified, make sure that only legitimate administrators (such as members of Domain Admin groups) are granted such critical privileges as Replicating Directory Changes All, Reset Password, or Full Control.
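As a rough illustration of what such an ACL scan looks like, here is a minimal PowerShell sketch (it assumes the RSAT ActiveDirectory module is installed; the GUID is the DS-Replication-Get-Changes-All extended right abused by DCSync-style attacks):

Import-Module ActiveDirectory

# Read the ACL off the domain root and list who holds the
# "Replicating Directory Changes All" extended right
$domainDN = (Get-ADDomain).DistinguishedName
$replAllGuid = [guid]'1131f6ad-9c07-11d1-f79f-00c04fc2dcd2'

(Get-Acl "AD:\$domainDN").Access |
    Where-Object { $_.AccessControlType -eq 'Allow' -and $_.ObjectType -eq $replAllGuid } |
    Select-Object IdentityReference, ActiveDirectoryRights

Any identity in that output that isn't a well-known admin group deserves a closer look.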

Privileged access management. Building a well thought out privileged access management strategy can also help you solve the problem of stealthy admins. Your cybersecurity strategy should include two measures:

  • Continuous monitoring and audit of the network
  • Effective management of privileged access to critical data and assets


Audit and monitoring are important for several reasons. First and foremost, they ensure better visibility into the network: you gain knowledge of who can access what. Secondly, all the information gathered at this stage is essential for investigating security incidents, should any take place in your organization.

When monitoring your network, pay special attention to the following factors:

  • What accounts have elevated privileges and can access your company’s critical assets (who can access particular servers or domains, who can work with your company’s sensitive information)
  • What privileged accounts and elevated permissions were added just recently (to identify a possible attack in progress)
  • Whether there are any suspicious activities (a sudden use of a “dead” privileged account, an admin logging in from an unusual IP address, and so on)


Ensuring an appropriate level of privileged access management is the second step in building an efficient cybersecurity strategy and combating shadow admins. Once you know who can access your company’s valuable data, you can take necessary measures to either secure or dismiss these accounts. Consider implementing the least-privilege approach for all privileged accounts and assigning any elevated permissions only on an “as needed” basis.

When looking for an efficient solution to these problems, turn your attention to Ekran System. It’s a universal platform for monitoring, auditing, and managing both regular and privileged users. This platform gives you full visibility into your network and allows you to take proactive measures to prevent privilege misuse at any level.

Conclusion

Delegated or shadow administrative accounts can pose a serious threat to an organization’s cybersecurity when they remain undiscovered. However, identifying stealthy admins isn’t enough – you need to manage them effectively in order to mitigate the cybersecurity and financial risks they pose. While ACL scanning works well for discovering accounts with elevated permissions, the only way to effectively manage and secure these accounts is by implementing an appropriate level of privileged access management.

Aug 24, 2018

Sandbox-Evading Malware Are Coming: 7 Most Recent Attacks

Nowadays, anti-malware applications widely use sandbox technology for detecting and preventing viruses. Unfortunately, criminals are developing new malware that can evade this technology. If such malware detects the signs of a VM environment, it remains inactive until it is outside of the sandbox. Experts predicted that in 2018 we would see an increasing number of cyber attacks performed with sandbox evasion. However, the epidemic actually started two years ago. Let's look at the most recent attacks that were successful because modern security solutions weren't able to detect sandbox-evading malware.

1. Grobios

Since early March 2018, there have been cases of attacks performed with the RIG Exploit Kit that infect victims with a backdoor trojan called Grobios. This malware is packed with PECompact 2.xx, which allows it to evade static detection. Though the unpacked file exposes no function names, it uses hashing to obfuscate the names of the API functions it invokes, and it parses the PE headers of DLL files to match a function's name to its hash. In addition, the trojan performs a series of checks to become aware of its environment. In particular, it looks for virtual machine software like Hyper-V or VMWare, looks for a username containing the words "malware", "sandbox", or "maltest", and compares driver names against its blacklist of VM drivers.

2. GootKit

This banking trojan has been attacking users, mainly in Europe, through spam sent via MailChimp since 2017. It steals bank customers' credentials and manipulates their online sessions. Before installation, the malware uses a dropper to become aware of its environment. The dropper looks for specific names in the Windows Registry and for virtual machine resources on disk. It also checks the device's BIOS to discover whether there is a virtual machine client installed, and it examines the machine's MAC address. If the dropper doesn't find any signs of a sandbox, the virus payload is executed and the GootKit trojan carries out additional checks, such as looking for hard drive and CPU names that confirm a physical machine, as well as for virtual machine values.

3. ZeuS Panda

This is another banking trojan that uses environment-aware techniques to skip the sandbox. Its main goal is stealing users' banking credentials and account numbers by implementing a "man in the browser" attack. In order to infect a targeted computer, it changes the browser's security settings and alerts. After loading, the trojan checks for indicators of a sandbox environment, like the presence of Sandboxie, ProcMon, the SoftICE debugger, and other tools. In 2018, ZeuS Panda targeted banks in Japan, Latin America, and the United States, as well as popular websites like YouTube, Facebook, and Amazon.

4. Heodo

Heodo is a banking trojan that was first detected in 2016 and subsequently used in a 2017 attack against US bank clients. This malware infects victims through invoice emails from a known contact that contain an attached PDF file. After a user clicks on the attachment, the trojan is loaded. It uses a technology known as a crypter that allows the malware to hide from the sandbox environment. Heodo embeds itself within software that is already installed on the infected computer and makes mutated copies of itself on the infected system.

5. QakBot Trojan

A massive attack with the QakBot trojan was detected in 2017, when the malware caused lockouts of Active Directory users from their company's domain by stealing user credentials. This malware infects victims with a dropper that uses delayed execution to evade the sandbox. It loads onto the targeted computer and waits 10 to 15 minutes before executing. Because antivirus sandboxes analyze newly loaded files for only a short period of time, the dropper remains undetected.

6. Kovter

This trojan was initially developed as police ransomware, but in 2017 it was detected as fileless malware that can easily bypass sandbox detection. It infects victims via a malspam email with an attachment that contains macros for Microsoft Office files, or a .zip attachment that contains infected JavaScript files. By using the Windows registry, Kovter stays undetected by the sandbox. Victims are asked to pay a $1,500 ransom in Bitcoin.

7. Locky

Locky is a classic example of environment-aware malware, released in 2016. It was spread through an email campaign that contained an infected Microsoft Word document. The document had a malicious macro that saved and ran a binary file, which downloaded the encryption trojan. This malware easily bypasses the sandbox because the virus's execution only begins with a user interaction, such as enabling the macro, and a VM environment doesn't perform any interactions with the infected document.

How to withstand sandbox-evading malware

As you can see, hackers are applying different sandbox evasion techniques to make their viruses undetectable in the sandbox. After infecting the victim's computer, this malware tries to understand its environment by doing the following:

  • looking for signs of virtual machine (ZeuS Panda)
  • detecting system files (GootKit)
  • waiting for user interactions (Locky, Kovter, Heodo)
  • beginning its execution in a specified time (QakBot Trojan)
  • obfuscating the system data (Grobios)

Sandbox technology is unable to detect environment-aware viruses, which lets them harm your computer. Thus, developers of security software should turn their attention to more advanced approaches to malware detection based on customized sandbox environments, behavior analysis, machine learning, and other techniques.

Conclusion

Sandbox-evading viruses are a new type of modern malware that can't be detected by traditional antivirus solutions. Computer users are now at high risk of becoming victims of cyber criminals, as this malware is rapidly spreading across the Web. While users should follow cybersecurity best practices, software developers should hurry up and implement the latest technologies to improve their anti-malware solutions.

Jul 27, 2018

The Microsoft License Verification Process Scam

Oh man, oh man do I hate Microsoft! Not the software so much, I mean they do actually put out really good products. What I hate is their licensing rules, and how they make them so damned convoluted and confusing! On top of that, right after you've worked with your Microsoft licensing re-seller to button up your licenses, you may periodically get contacted to participate in the Microsoft License Verification Process! Weeeeee!

I'm not sure what happened, but about two years ago was my first experience with this. We complied, and Microsoft came back and said we were out of compliance based on random changes they had made to their licensing since our last true-up with our re-seller, and we had to fork over about $30,000 that we didn't budget for to become compliant again.

To be fair, our previous re-sellers did give us some bad information about licenses, so after that audit we switched re-sellers.

Well, I just got picked again this year. In the 13 years I've worked in Information Technology, these last two years were the first time I'd ever seen this... And now I think I know why. It's basically a shady marketing tool!

I reached out to our new re-seller about this so called audit, and here is what they said:
We’ve run into this a lot recently and over the years. Their wording seems to hide the fact that you don’t have to do this. 
The emails starting with “v-“ are not Microsoft and they are not audits. They are voluntary, but the results are shared with Microsoft at which point you would be required to reconcile anything they find.  
If you want to do an engagement like this to assess your licensing, we can do it for you. We don’t share the results with Microsoft and just deliver them to you.
In their frequently asked questions, the people contacting me about this Microsoft Verification Process say this:


I asked my rep about that too and they said:
Man I don’t like that wording. “us” 
That v- in the email means that person doesn’t work for Microsoft, but is contracted. Microsoft allows this to happen, but it’s not really their employees. I see these all the time and we just ignore them unless you would like to do an engagement. 
Microsoft does audit occasionally, but this email is pretty threatening. Microsoft audits don’t come in email form, I’m 99% sure.
So, long story short: if you are contacted about participating in a Microsoft License Verification and the people contacting you have a "v-" before their email address, you should ignore them and reach out to your re-seller instead. It's really just a ploy for Microsoft to increase their bottom line before your annual true-up!

Have you experienced one of these? Did you comply? Is my rep wrong? Let us know your story in the comments!


Jun 29, 2018

I've switched to Let's Encrypt for TLS encryption on my personal email server

Years ago I started using iRedmail for my personal email. I love it, and it's super easy to set up. Way back then I purchased a three-year Comodo SSL certificate for it. Well, that certificate has expired, and it looks like none of the affordable SSL companies are offering three-year certificates anymore... Bummer.

Oh, well. I figured, why waste the money anyway when I could just get a free certificate from Let's Encrypt? The only issue I have with Let's Encrypt is that they only issue three-month certificates. Apparently, they think it's more secure that way. Here are the reasons they give from their blog:

  • They limit damage from key compromise and mis-issuance. Stolen keys and mis-issued certificates are valid for a shorter period of time.
  • They encourage automation, which is absolutely essential for ease-of-use. If we’re going to move the entire Web to HTTPS, we can’t continue to expect system administrators to manually handle renewals. Once issuance and renewal are automated, shorter lifetimes won’t be any less convenient than longer ones.

Well, they are right about one thing: the automated renewal process is pretty convenient. The only issue I had with it was that they recommend using Certbot for Linux-based servers. When I followed this post (How To Secure Nginx with Let's Encrypt on Ubuntu 16.04) on how to install it, I got a bunch of errors and jacked up my Ubuntu-based iRedmail server... (Thank God for backups!)

Anyway, there are much easier scripts and utilities around that do basically the same thing. I opted for acme.sh! From their page:
  • An ACME protocol client written purely in Shell (Unix shell) language.
  • Full ACME protocol implementation.
  • Support ACME v1 and ACME v2
  • Support ACME v2 wildcard certs
  • Simple, powerful and very easy to use. You only need 3 minutes to learn it.
  • Bash, dash and sh compatible.
  • Simplest shell script for Let's Encrypt free certificate client.
  • Purely written in Shell with no dependencies on python or the official Let's Encrypt client.
  • Just one script to issue, renew and install your certificates automatically.
  • DOES NOT require root/sudoer access.
  • Docker friendly
  • IPv6 support
  • It's probably the easiest & smartest shell script to automatically issue & renew the free certificates from Let's Encrypt.
Installation was easy, and so was requesting my first certificate. Part of the install process creates a cron job to automatically renew your certificates. The one modification I had to make was to create a script with the following to copy the new certs from the default location in the installing user's home directory to the directory where I keep my certificates and keys:

 #!/bin/bash
 # copy the renewed certificate and key from acme.sh's output directory
 cd ~/.acme.sh/domainname.com/
 yes | cp -rf *.cer /pathto/ssl/certs/
 yes | cp -rf *.key /pathto/ssl/private/
 # restart services so they pick up the new certificate
 service apache2 restart
 service dovecot restart
 service postfix restart

After that, I created a cron job to run that script nightly, since their renewal script runs twice a day. Boom, done! Now I shouldn't have to worry about SSL certificates on this server for a very long time, or at least until I build my next one.
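The crontab entry for that looks something like this (the script path is just an example):

 # run nightly at 2:30 AM, after acme.sh has had a chance to renew
 30 2 * * * /root/copy-certs.sh > /dev/null 2>&1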

Do you use Let's Encrypt on your servers? Do you like it? Why or why not? Let us know in the comments!

Jun 14, 2018

Script To Configure Your Azure Application Gateway For TLS 1.2 Only

If you are just now reading this post, you are cutting things pretty close with PCI DSS compliance! After all, you have until the end of the month to remove older versions of TLS to remain PCI compliant.

Well, if you are using Application Gateways in Azure to secure your web servers, you're in luck, because setting a custom SSL policy is pretty easy. You just have to do it via PowerShell.

Now, this script assumes you've already created your Application Gateway. If you are trying to configure one from scratch, you'll have to keep Googling my friend... Sorry.

Before you can run your script, you must first connect to Azure via PowerShell, and select your subscription.

  • Connect-AzureRmAccount
  • Select-AzureRmSubscription -SubscriptionName "<Subscription name>"

After that, you can copy and paste the below script to set your custom SSL policy. Be sure to replace the Application Gateway Name and the Resource Group Name to match your environment.

Here's the script:

 # get an application gateway resource  
 $gw = Get-AzureRmApplicationGateway -Name <Application Gateway Name> -ResourceGroupName <Resource Group Name>
 # set the SSL policy on the application gateway  
 Set-AzureRmApplicationGatewaySslPolicy -ApplicationGateway $gw -PolicyType Custom -MinProtocolVersion TLSv1_2 -CipherSuite "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA256", "TLS_RSA_WITH_AES_128_CBC_SHA256"  
 # validate the SSL policy locally  
 Get-AzureRmApplicationGatewaySslPolicy -ApplicationGateway $gw  
 # update the gateway with validated SSL policy  
 Set-AzureRmApplicationGateway -ApplicationGateway $gw  

After that, your Application Gateway will only support TLS 1.2, and will use the following ciphers in order:
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA256
TLS_RSA_WITH_AES_128_CBC_SHA256
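If you want to double check from the outside once the update finishes, nmap's ssl-enum-ciphers script will show exactly which protocols and ciphers the gateway offers (the hostname is an example):

nmap --script ssl-enum-ciphers -p 443 yourgateway.example.com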
Pretty easy right? Did this help you out? Let us know in the comments!

May 24, 2018

A faster and easier way to make LUN files for your SCST SAN

I've been writing a lot lately about SCST iSCSI SANs again. It's been a few years since I've had a chance to configure one of these from scratch, and a lot has changed since 2012 when I first started using them.

In the past I've always used dd to create LUN files for use with SCST. For thin provisioned LUNs I would run something like the following:
sudo dd if=/dev/zero of=lun1 bs=1 count=0 seek=1T
For thick provisioned LUNs I would run this instead:
 sudo dd if=/dev/zero of=lun1 bs=1024 count=1T seek=1T
Well, I found two utilities that do the same thing, but they are way faster and the syntax is way easier! One is called fallocate and the other is called truncate!

To create a thick provisioned LUN, you would use fallocate to create your file by running:
sudo fallocate -l 1T lun1
To create a thin provisioned LUN, you would use truncate to create your file by running:
truncate -s 1T lun1
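You can see the difference between the two with ls and du. Both files report the same apparent size, but only the fallocate'd one actually has blocks allocated:

ls -lh lun1   # shows the full 1T apparent size either way
du -h lun1    # ~0 for a truncate'd (sparse) file, ~1T for a fallocate'd one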
So simple right? Why am I just learning about this now!?

May 23, 2018

How to specify a thin provisioned LUN in SCST

The other day I wrote about how to install SCST 3.4.0 on Ubuntu 18.04. If you are not familiar with SCST, it is basically SAN target software that you can run on Linux so you can build your own low cost SAN storage. I've been using it for years, and just recently I've started to learn a few new things about it.

For instance, I used to think that for thin provisioning, all you had to do was to create a thin provisioned disk file to present as a LUN. To do that you just run the following:
sudo dd if=/dev/zero of=lun1 bs=1 count=0 seek=1T
The above creates a thinly provisioned 1TB LUN file called lun1. Simple right?

Well, this is great and all, but if you want to use features like TRIM or UNMAP to reclaim disk space, you also need to tell SCST that this LUN file is a thin provisioned LUN. To do that, you need to add the thin_provisioned parameter to the device section of your /etc/scst.conf file. See below for an example:

 HANDLER vdisk_fileio {  
     DEVICE lun1 {  
         filename /data/lun1  
         nv_cache 1  
         thin_provisioned 1  
     }  
 }  
 TARGET_DRIVER iscsi {  
     enabled 1  
     TARGET iqn.2018-05.bauer-power.net:iscsi.lun1 {  
         enabled 1  
         rel_tgt_id 1  
         GROUP VMWARE {  
             LUN 0 lun1  
             INITIATOR iqn.2018-05.com.vmware1:8bfdfcd0  
         }  
     }         
 }  

After making this change, you can either restart the scst daemon or reboot your SAN. If you can't do either, you will have to remove and re-add the LUN on the fly to make the change. To do that, run the following:
  • sudo scstadmin -rem_lun 0 -driver iscsi -target iqn.2018-05.bauer-power.net:iscsi.lun1 -group VMWARE
  • sudo scstadmin -close_dev lun1 -handler vdisk_fileio
  • sudo scstadmin -open_dev lun1 -handler vdisk_fileio -attributes filename=/data/lun1 thin_provisioned=1 
  • sudo scstadmin -add_lun 0 -driver iscsi -target iqn.2018-05.bauer-power.net:iscsi.lun1 -group VMWARE -device lun1
Obviously, you need to change the lun names, file names and target names to match your environment. Special thanks to Gilbert Standen from the Scst-devel mailing list for the above steps on making this change on the fly! Check out his blog here: (brandydandyoracle)
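Since the example group above is for VMWare, you can also confirm from an ESXi host that the LUN now advertises UNMAP support (a sketch; the naa device identifier is an example):

 esxcli storage core device vaai status get -d naa.5000000000000001
 # look for "Delete Status: supported" in the output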

There are a lot of parameters you can add to your config file as well. Here's a list from SCST's Source Forge page:

  - filename - contains path and file name of the backend file.
  - blocksize - contains block size used by this virtual device.
  - write_through - contains status of write back caching of this virtual device.
  - read_only - contains read only status of this virtual device.
  - o_direct - contains O_DIRECT status of this virtual device.
  - nv_cache - contains NV_CACHE status of this virtual device.
  - thin_provisioned - contains thin provisioning status of this virtual device.
  - removable - contains removable status of this virtual device.
  - rotational - contains rotational status of this virtual device.
  - size_mb - contains size of this virtual device in MB.
  - t10_dev_id - contains and allows to set T10 vendor specific identifier for Device Identification VPD page (0x83) of INQUIRY data. By default VDISK handler always generates t10_dev_id for every new created device at creation time based on the device name and scst_vdisk_ID scst_vdisk.ko module parameter (see below).
  - usn - contains the virtual device's serial number of INQUIRY data. It is created at the device creation time based on the device name and scst_vdisk_ID scst_vdisk.ko module parameter (see below).
  - type - contains SCSI type of this virtual device.
  - resync_size - write only attribute, which makes vdisk_fileio to rescan size of the backend file. It is useful if you changed it, for instance, if you resized it.

Pretty cool right? Let us know what you think in the comments!

May 18, 2018

How to install SCST 3.4.0 in Ubuntu 18.04

Well crap. The other day I talked about how I re-configured one of my Bauer-Power iSCSI SANs using tgt. It was an easy setup, but once I started using it, I noticed that tgt performed like shit. CPUs were spiking like crazy on the SAN itself, and when I was backing stuff up, I couldn't access the drive on the backup server. It would become completely unresponsive!

I decided I had to go back to SCST. Luckily, installing it is way easier than it used to be. To install version 3.4.0 now, just do the following:
  • Create an empty working directory
 rm -rf ~/scst-build  
 mkdir ~/scst-build  
 cd ~/scst-build  
  • Install dependencies
 sudo apt install git devscripts equivs dkms 
 git clone -b ubuntu-3.4.x https://github.com/ubuntu-pkg/scst.git  
 cd scst  
 sudo mk-build-deps -i -r  
  • Build the package
 dpkg-buildpackage -b -uc  
  • Pre-install, create two directories (For some reason the deb packages don't do it...)
 sudo mkdir -p /var/lib/scst/pr  
 sudo mkdir -p /var/lib/scst/vdev_mode_pages  
  • Install
 sudo dpkg -i ../scst-dkms_*deb  
 sudo dpkg -i ../iscsi-scst_*.deb  
 sudo dpkg -i ../scstadmin_*deb  

Now you just have to configure your LUN using the instructions in my tgt post, and configure your /etc/scst.conf file using my old SCST post. Once those are done restart the scst service.

 sudo service scst restart  

Of course, if you don't want to mess with all of the above stuff, you could just download my pre-packaged scst 3.4.0 deb files for Ubuntu 18.04 and run my install script...

 cd ~  
 wget https://mail.bauer-power.net/drop/scst/scst-3.4.0-Ubuntu.tgz  
 tar -xzvf scst-3.4.0-Ubuntu.tgz  
 cd scst*  
 sudo chmod +x install.sh  
 sudo ./install.sh  

Now just setup your LUN files, create your /etc/scst.conf file and run the following commands:

  • modprobe scst 
  • modprobe scst_vdisk 
  • modprobe iscsi-scst 
  • iscsi-scstd 
  • scstadmin -set_drv_attr iscsi -attributes enabled=1 
  • scstadmin -config /etc/scst.conf
  • update-rc.d scst defaults
  • /etc/init.d/scst restart
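To sanity-check that everything loaded, you can verify the kernel modules and poke at SCST's sysfs tree (paths per SCST's sysfs interface):

 lsmod | grep scst
 ls /sys/kernel/scst_tgt/targets/iscsi/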

Boom! Now you are off to the races!

May 17, 2018

How to Re-IP An OSSEC Agent

At my day job we use OSSEC for host based intrusion detection. It works great! It does all sorts of things, from verifying registry integrity and checking files for changes to reading security logs, and it sends email alerts for anything out of the ordinary.

Well, we're in the process of migrating servers from on-premises to Azure, so some of our servers are getting new IP addresses. Googling around, I didn't find a good way to re-IP the agents other than removing and re-adding them. I didn't want to do that.

It turns out, there is an easier way. All you have to do is edit /var/ossec/etc/client.keys with your favorite text editor and modify the IP address of the client you want to change. If you don't want to deal with this in the future, you can replace the IP address with 'any' so that OSSEC will accept connections from that client as long as the hostname and the client key match.
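For reference, each line in client.keys follows the format <id> <name> <ip> <key>, so an updated entry would look something like this (the names and keys below are made up):

 001 webserver01 10.20.30.40 f298e5d084fa1b2290c3...
 002 dbserver01 any 7ac41b2290d084fa1f298e...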

After you make your change, restart the OSSEC daemon on your OSSEC server:
sudo service ossec restart
Re-run /var/ossec/bin/manage_agents and extract the key again for the agent you want to update. Then on the client, open OSSEC Agent Manager as an administrator, click Manage > Stop OSSEC, re-paste the key, click Save, then restart OSSEC by clicking Manage > Start OSSEC.

Boom! Done! You should now be able to connect using the new IP address or 'any'.

May 16, 2018

Bauer-Power SAN 3.0

NOTE: Please read my post about installing SCST on Ubuntu 18.04 first...

Many moons ago I wrote about how to configure an Ubuntu Linux based iSCSI SAN. The first iteration used iSCSITarget as the iSCSI solution. The problem with that was that it didn't support SCSI-3 Persistent Reservations. That meant it wouldn't work for Windows failover clustering, and you would probably see issues if you tried to use it with VMWare, XenServer or Hyper-V.

The second iteration used SCST as the iSCSI solution, and that did work pretty well, but you had to compile it from source and the config file was kind of a pain in the ass. Still, it did support SCSI-3 Persistent Reservations and was VMWare ready. It's the solution I've been using since 2012, and it's worked out pretty well.

Well, the other day I decided to rebuild one of the original units I set up from scratch. The first two units I did this setup on were SuperMicro SC826TQs with 4 NICs, 2 quad-core CPUs, 4GB of RAM, a 3Ware 9750-4i RAID controller, and twelve 2TB SATA drives. This sucker gave me about 18TB of usable backup storage after I configured the 12 disks in RAID 6.

This time I used Ubuntu 18.04 server because unlike the first time I did this, the latest versions of Ubuntu have native drivers for 3Ware controllers. On top of that, the latest versions of Ubuntu have the iSCSI software I wanted to use in the repositories... More on that later.

I partitioned my disk as follows:

Device      Mount Point   Format      Size
/dev/sda1   N/A           bios/boot   1MB
/dev/sda2   /             ext4        10GB
/dev/sda3   N/A           swap        4GB
/dev/sda4   /data         xfs         18TB

After Ubuntu was installed, I needed to set up my network team. Ubuntu 18.04 uses Netplan for network configuration now, which means that NIC bonding or teaming is built in. In order to set up bonding or teaming, you just need to modify your /etc/netplan/50-cloud-init.yaml file. Here is an example of how I set up my file to team the four NICs I had, as well as use MTU 9000 for jumbo frames:


network:
    version: 2
    ethernets:
        enp6s0:
            dhcp4: no
            dhcp6: no
            mtu: 9000
        enp7s0:
            dhcp4: no
            dhcp6: no
            mtu: 9000
        enp1s0f0:
            dhcp4: no
            dhcp6: no
            mtu: 9000
        enp1s0f1:
            dhcp4: no
            dhcp6: no
            mtu: 9000
    bonds:
        bond0:
            interfaces: [enp6s0, enp7s0, enp1s0f0, enp1s0f1]
            mtu: 9000
            addresses: [100.100.10.15/24]
            gateway4: 100.100.10.1
            parameters:
                mode: balance-rr
            nameservers:
                addresses: [8.8.8.8, 8.8.4.4]


It's important to note that Netplan is picky about indentation. You must have everything properly indented or you will get errors. If you copy the above config, and modify it for your server, you should be fine though.

After setting up my bonded network, I installed my software. I opted to use tgt this time. If you are unfamiliar with it, it's apparently a re-write of iscsitarget, but it supports SCSI-3 Persistent Reservations. I tested it myself using a Windows Failover Cluster Validation test:



Boom! We're in business!

To install tgt simply run the following:
sudo apt-get install tgt
After installing, you will want to create a LUN file in /data. To create a thin provisioned disk, run the following:
sudo dd if=/dev/zero of=/data/lun1 bs=1 count=0 seek=1T
This creates a 1TB thinly provisioned file in /data called lun1 that you can present to iSCSI initiators as a disk. If you want to create a thick provisioned disk simply run:
sudo dd if=/dev/zero of=/data/lun1 bs=1024 count=1T seek=1T
Once you have your LUN file, you will want to create a config file for your LUN. You can create separate config files for each LUN you want to make in /etc/tgt/conf.d. Just append .conf at the end of the file name and tgt will see it when the service restarts. For our purposes, I created one called lun1.conf and added the following:

<target iqn.2018-05.bauer-power.net:iscsi.lun1>
        backing-store /data/lun1
        write-cache off
        vendor_id www.Bauer-Power.net
        initiator-address 100.100.10.148
</target>


The above creates an iSCSI target and restricts access to it to only 100.100.10.148. You can also use initiator-name to restrict access to particular iSCSI initiators, or you can use incominguser to specify CHAP authentication (see the example below). You can also use a combination of all three if you want. Restricting by IP works for me though.
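For example, a CHAP-protected variant of the same target would look something like this (the username and password are made up):

<target iqn.2018-05.bauer-power.net:iscsi.lun1>
        backing-store /data/lun1
        write-cache off
        incominguser iscsiuser SuperSecret123
</target>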

I also opted to disable write-cache because, with it enabled, I noticed that tgt was pegging my RAM. On top of that, my RAID controller handles write caching on its own, so disabling it actually helped my performance.

All of this being said, you can find lots of configuration options here: (tgt Config Options)

After you have your file created, all you have to do is restart the tgt daemon and you're ready to serve up your iSCSI LUN!
sudo service tgt restart
After you restart, you can see your active LUNs by running:
sudo tgtadm --op show --mode target
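From a Linux initiator, you can confirm the target is actually reachable with open-iscsi, using the example IPs from above:

sudo iscsiadm -m discovery -t sendtargets -p 100.100.10.15
sudo iscsiadm -m node -T iqn.2018-05.bauer-power.net:iscsi.lun1 -p 100.100.10.15 --login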
You can also create LUNs on the fly without restarting tgt. This is handy if you need to add a LUN and you don't want to disrupt connections to LUNs you've already created. To do that, create your LUN file like you did before. Obviously, name it something new, like lun2.

Next, make sure to note which LUNs you already have running by running this command:
sudo tgtadm --op show --mode target
Target 1 = tid 1, Target 2 = tid 2 and so on and so forth. If you only have one target, then your next target will be tid 2. Assuming that, and assuming your new LUN file is called lun2 you would run:

sudo tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2018-05.bauer-power.net:iscsi.lun2
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /data/lun2
sudo tgtadm --lld iscsi --op bind --mode target --tid 2 --initiator-address 100.100.10.148

This will create a target that will be available only to 100.100.10.148. If you want to allow other IPs, re-run that last line for each IP address you want to allow.

Now if you want to have this LUN persist after a reboot, you can either manually create a conf file in /etc/tgt/conf.d/ or you can run the following to automatically create one for you:
tgt-admin --dump | sudo tee /etc/tgt/conf.d/lun2.conf
The only issue with the above is that it dumps all running target information into your new file. You will have to go in there and remove the other targets. In this case, it's just better to manually create the config file... but that's just me. Also, that is not a typo... tgt-admin is a different tool than tgtadm... Weird right?

Anyway, this setup is way easier than SCST ever was. I'm looking forward to replacing all of my SCST SANs with tgt in the upcoming months.

It's important to note that using the above hardware is not going to give you high performance. It's suitable for backup storage, and that's about it. If you want to run VMs or databases, I'd recommend getting 10GBe switches for use in iSCSI. You can get one fairly cheap here (10GBe switches). If you get 10GB switches, you will need a 10GB NIC as well. You can get one here (10GB NICs). Finally you will need faster disks. You can get 15K RPM SAS disks here (15K RPM SAS).

What do you think about this setup? Are you going to try it out? Let us know in the comments!



May 10, 2018

Script to Clone Azure Network Security Groups (NSGs) in PowerShell

This script is a life saver! I am working on a big project to migrate to Azure, and one of the tedious parts of the project is setting up Network Security Groups, or NSGs. My company uses many granular rules, so setting these up the first time is time consuming. The idea of manually setting them up in other regions is downright daunting!

Well, not anymore! I found this series of commands from Virtual Geek that lets you do it easily in PowerShell!

First, you need the Azure PowerShell module if you don't already have it. After that, run the following:
$TemplateNSGRules =  Get-AzureRmNetworkSecurityGroup -Name '<Original NSG>' -ResourceGroupName '<Resource Group of Original NSG>' | Get-AzureRmNetworkSecurityRuleConfig
This creates a variable called TemplateNSGRules that we will use in step three. Next create your new NSG by running the following:

$NSG = New-AzureRmNetworkSecurityGroup -ResourceGroupName '<Destination Resource Group>' -Location '<Region Where You Want The New NSG>' -Name '<Name of New NSG>'
If you have already created an NSG in the portal, you would use this instead:

 $NSG = Get-AzureRmNetworkSecurityGroup -Name '<Name of New NSG>' -ResourceGroupName '<Destination Resource Group>'
Once you have executed one of the previous two commands, you will have a new variable called NSG that we will run a foreach loop on to import our rules from the original NSG:

 foreach ($rule in $TemplateNSGRules) {
    $NSG | Add-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Direction $rule.Direction -Priority $rule.Priority -Access $rule.Access -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange -Protocol $rule.Protocol # -Description $rule.Description
    $NSG | Set-AzureRmNetworkSecurityGroup
}
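One tweak to consider: Set-AzureRmNetworkSecurityGroup pushes the entire NSG to Azure on every pass through the loop, so for a large rule set it may be faster to add all the rules locally first and push once at the end. An untested variant of the same loop:

 foreach ($rule in $TemplateNSGRules) {
    $NSG | Add-AzureRmNetworkSecurityRuleConfig -Name $rule.Name -Direction $rule.Direction -Priority $rule.Priority -Access $rule.Access -SourceAddressPrefix $rule.SourceAddressPrefix -SourcePortRange $rule.SourcePortRange -DestinationAddressPrefix $rule.DestinationAddressPrefix -DestinationPortRange $rule.DestinationPortRange -Protocol $rule.Protocol | Out-Null
 }
 # push all of the accumulated rules to Azure in one update
 $NSG | Set-AzureRmNetworkSecurityGroup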
Boom! That's it! Now you have an exact clone of your original NSG in just a few minutes! Make sure you replace the items I used in < > to fit your environment!

Did this help you out? Let us know in the comments!

May 7, 2018

Like apt-get for Windows! Meet Chocolatey!

I'm surprised I haven't written about this already. I've known about it for several years now, so I thought I would have written about it before now... I guess I was wrong.

Anyway, I started thinking about Chocolatey again today when I was asked to come up with a way to easily handle third party application patches. There are tools out there that do it, but Chocolatey is free and it works pretty much the same way that apt-get does in Ubuntu. That means you can script it and automate it!

If you are unfamiliar with Chocolatey, this is a description from their page:
Chocolatey is a package manager for Windows (like apt-get or yum but for Windows). It was designed to be a decentralized framework for quickly installing applications and tools that you need. It is built on the NuGet infrastructure currently using PowerShell as its focus for delivering packages from the distros to your door, err computer. 
Chocolatey is a single, unified interface designed to easily work with all aspects of managing Windows software (installers, zip archives, runtime binaries, internal and 3rd party software) using a packaging framework that understands both versioning and dependency requirements. Chocolatey packages encapsulate everything required to manage a particular piece of software into one deployment artifact by wrapping installers, executables, zips, and scripts into a compiled package file. Chocolatey packages can be used independently, but also integrate with configuration managers like SCCM, Puppet, and Chef. Chocolatey is trusted by businesses all over the world to manage their software deployments on Windows. You’ve never had so much fun managing software!
If you want to use it for 3rd party software updates, you can install Chocolatey, then just run a scheduled task that runs the following command:
choco upgrade all -y
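Creating that scheduled task from an elevated prompt would look something like this (the task name and time are examples):

schtasks /Create /TN "Chocolatey Upgrade" /TR "choco upgrade all -y" /SC DAILY /ST 03:00 /RU SYSTEM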
It's important to note that Chocolatey will only update software that you've installed with Chocolatey. So if you already have Adobe Reader, Java, Flash, etc., you will first need to run the install commands for these applications with Chocolatey before you can start getting updates. You don't have to uninstall and re-install them first though, which is nice.

For instance, I already had 7zip installed, but now I want to make sure I get updates for it with Chocolatey, so I ran the following to install the latest version of 7zip:
choco install 7zip -y
You can find a full list of their packages here:  https://chocolatey.org/packages

What do you use to keep your third party software up to date? Let us know in the comments!

May 4, 2018

STOP: 0x0000007B BSOD After Restoring UrBackup Image to XenServer VM

Sorry I haven't been writing very much lately. I've been completely slammed at my day job. I'm juggling many different projects, trying to chase down consultants, putting out fires, training new hires and guys who just got promoted, etc etc.

One of the projects I'm working on is setting up UrBackup for full image backups as well as file level backups. We've been using CrashPlan for years, but that only really gives us file level backup capabilities. The other day we had a backplane on one of our SAN units take a shit, and we lost connection to our storage for a bit. Luckily, everything came back up fine, but I got to thinking what an epic pain it would be to rebuild some of our servers with just the file backups.

So after originally dismissing UrBackup a little while back, I decided to take another look at it. It turns out it is pretty bad ass! I was able to take an image backup of one of our VMWare VMs and restore it to a blank VM in about 20 minutes!

So it obviously worked great with a VMWare VM, but we also use XenServer pretty heavily in our environment. I wanted to test a restore on that as well. That didn't go so well.

You see, I was backing up a Windows 2008 R2 VM, and when I went to restore it to a blank Windows 2008 R2 VM in XenServer, I got this blue screen of death message!

STOP: 0x0000007B

Oh hell, what is that about?

Anyway, Googling it I found some forums where people say to run the following command in the XenServer terminal:

xe vm-param-set uuid=<UUID of the VM> platform:device_id=0001

Pro tip, that is bullshit. It didn't work at all.

You know what did work? Creating the blank VM using the Windows XP SP3 (32-bit) template!



Once I did that, and ran the restore again, the VM booted up just fine!

I don't know what is up with that template, but it's the one-size-fits-all, never-fails template. Plus, it doesn't matter if you are running a 64 bit OS or not!

I once wrote about issues with Ubuntu in XenServer, and the fix for that was to use a Windows XP template too!

Anyway, if you run into this issue. Try giving the Windows XP template a shot. You can thank me later!

If you need more than 4GB of RAM for your VM, you could also try the Windows 2003 64 bit template. It should work too.

Apr 25, 2018

Verge And PornHub MAKE HISTORY!



It finally happened. Verge talked a big game, claiming to announce the biggest partnership in the history of cryptocurrency. A lot of the haters and doubters scoffed, but XVG truly meant business. Now that this crypto is joining forces with PornHub, they are truly an unstoppable force, inevitably poised to surpass even Bitcoin. What a time to be alive.

Apr 24, 2018

How Can Blockchain Prevent Fraud in Payment-Processing Services?

What Is Payment Fraud?

With the growing popularity of online marketing and business, we are unfortunately facing new types of fraud. Fraud in payment-processing services is one of the most significant threats to all e-commerce markets, since their main working principles are based upon online transactions. It involves identity and private property theft, or the illegal takeover of an individual’s payment information to make purchases or remove funds. To eliminate it, companies are setting up fraud detection using blockchain technologies.

In 2017, the global fraud detection and prevention market was valued at US$16.8 billion. Areas in which fraud detection and prevention are applied include insurance claims, money laundering, electronic payments, and banking transactions, both online and offline.


When discussing deceitful schemes in payment processing, we should stress that the most common type of scam involves credit cards. As stated above, criminals use a stolen card or card details to commit illegal purchases or transfer money. The customer whose data is stolen may file a report, and, after numerous transactions, receive the money back. In the case of an illegal purchase, a retailer or business is penalized, and loses its money. Therefore, it is crucial to take action to protect your commerce from these types of losses.

Use Blockchain to Prevent and Detect Fraud

The principles of blockchain technology allow people to keep an open, transparent, cryptographically secured record of all kinds of transactions committed between two pseudo-anonymous parties. As this record is maintained in an absolutely decentralized manner, it is independent of local authorities and banks. Therefore, it is difficult to tamper with. Actions like double spending, a common problem in digital money transactions, are difficult to commit thanks to a consensus protocol that provides trust. Because blockchain records are permanent and shared only between the parties involved, they provide better security. And in case you need better security for your company, the Applicature team of experts can help you set up blockchain technology for fraud detection.

Blockchain offers a wide range of opportunities, and a great number of companies use it to gain financial security. According to Statista surveys, 23% of companies are using this technology to prevent scams, and for security clearance. This percentage comes in second to those who use it for international money transfers.


Advantages of Blockchain Technology in the Prevention of Payment Fraud


It is possible to detect and prevent illegal activities in payment processing without people’s involvement by using the following features that stop fraud with blockchain:

  • Permanence. It is impossible to disable the system, as it functions on various devices worldwide at the same time. All of the gadgets storing the complete history of transactions cannot be hacked at once. 
  • Transparency. As a chain of distinct blocks, the system keeps a record of all transactions in each of these blocks. If any corrections or additions occur in these records, they are verified and checked by the whole system of block validators, which are machines that must comply with strict rules. Any illegal interference will be noticed promptly, and the involved parties will be blocked from making such transactions. 
  • Immutability. Blockchain provides significant benefits for fraud detection. As soon as a record is entered into the system, it cannot be deleted or forged. 
  • Cryptography. Blockchain technology employs widely-adopted cryptography protocols that protect users’ identity. Validation and confirmation are possible only with unique digital signatures. This information cannot be tampered with or recreated by anyone due to the random nature of its creation. 
  • Postponed payments and multisignatures. If you need to pay for a certain product but do not trust the seller, Blockchain allows you to use multi-signature transactions for postponed payments. In this case, the seller receives money only when the buyer gets his goods. Delivery service (or any other trusted party) can act as an additional level of arbitration that assures the buyer has the funds and the seller sends the goods.

Though blockchain technology provides better security, it cannot on its own protect against someone hacking into your digital wallet or stealing your identity. To increase protection, blockchains are often paired with machine-learning capabilities. This technology works as an additional layer that analyzes models of users' behaviour: personal data can be stolen and reused, but a person's behaviour pattern is far harder to imitate convincingly, because it is close to unique.
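As a rough illustration of that machine-learning layer, here is a minimal sketch that flags transactions deviating from a user's usual pattern. It uses scikit-learn's IsolationForest on two made-up features (amount and hour of day); the features, numbers, and threshold are assumptions for the example, not any vendor's actual model:

 import numpy as np
 from sklearn.ensemble import IsolationForest

 # Hypothetical history for one user: [amount_usd, hour_of_day].
 # Small mid-afternoon purchases are this user's normal behaviour.
 rng = np.random.default_rng(42)
 history = np.column_stack([
     rng.normal(40, 10, 500),   # typical amounts around $40
     rng.normal(14, 3, 500),    # typically made around 2 p.m.
 ])

 # Train an anomaly detector on the user's own past transactions.
 model = IsolationForest(contamination=0.01, random_state=42)
 model.fit(history)

 # Score new activity: a familiar purchase vs. a large 3 a.m. transfer.
 new_transactions = np.array([
     [45.0, 15.0],    # looks like this user
     [2500.0, 3.0],   # large amount at an odd hour
 ])
 for tx, label in zip(new_transactions, model.predict(new_transactions)):
     verdict = "flag for review" if label == -1 else "ok"
     print(tx, "->", verdict)

In production, a system like this would be trained on far richer behavioural features (devices, locations, merchants, timing), but the principle is the same: the pattern, not the credentials, is what is hard to steal.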

Startups like Feedzai bring this kind of fraud detection to the cryptocurrency community, combining machine-learning technologies and data science to keep commerce safe.


So, if you want to secure your digital identity and prevent it from being tampered with, blockchain technology can protect you against fraud cases like these. To sum up: keep your personal information in a blockchain framework accessible only to authorized participants who can verify and ensure its validity. Thefts in payment processing still occur, so it is important to use blockchains designed for businesses and to pair them with machine-learning software. Systems built this way are more resistant to these vulnerabilities and grant you greater security.

Apr 23, 2018

How to transfer files faster than 10Mbps in Windows

At my day job we have a disaster recovery (DR) site a few thousand miles away. Between our main data center and our DR site, we have a 100Mbps dedicated transport link. It's used to transfer files, and be the path for database mirroring etc.

One problem we noticed was that copying large files took forever. We would copy files over a typical SMB/CIFS share, and even though the transport link was 100Mbps and our NICs and local switches were all 1Gbps, transfers would still crawl along at around 10Mbps.

That's just not good enough, especially when you have a lot of large files to copy relatively quickly. I'm talking hours, not days.

Well, to solve this problem I remembered good old-fashioned Robocopy! The latest versions of Robocopy, which now come pre-installed on Windows servers by the way, have a multi-threading feature!

To max out our 100Mbps transport link, I set the threads with my Robocopy to 20 and BAM! We were sending files at close to 100Mbps!

The command I ran was:

robocopy "C:\Source" "\\DestinationServer\Destination" /mt:20 /E /V /ETA /R:2 /W:5 /R:10

Pretty simple right? The /MT switch is where you can adjust the number of threads used; it accepts values from 1 to 128 and defaults to 8 when no value is given.

I've seen lots of forums and posts with people talking about copying over Windows shares being limited to 10Mbps. Using Robocopy is certainly one effective workaround!

Do you know of a different way to maximize throughput when copying files in Windows? Let us know in the comments!

Apr 18, 2018

6 Ways a Business Can Use a Live Video Streaming App

Video is content people are ready to consume for hours on end. You and your business can definitely benefit from it, and in this article we will tell you what you can use video for.

Big social networks and video services have already acknowledged its power. Netflix has seen an extreme boost, and social networks are getting their users hooked on live streaming. Facebook, Instagram, and others have successfully ridden this wave by opening their own streaming capabilities to the world, and saw rapid growth immediately.

Some businesses create a video streaming app and use it as a marketing tool. 

Here are some numbers:

  • Views of branded content have increased 99% on YouTube… and 258% on Facebook! 
  • By 2019, 80% of all internet traffic is expected to be video.

As you can see, promotion with the help of video is very effective and will make your current and potential customers care about what you have to offer. However, in this article we will talk specifically about live video streaming. Why?

The main reason is that live streaming is even more effective. According to Tubular Insights, people spend 8 times longer watching live streams than regular on-demand videos.

You may wonder why that is. At first sight, on-demand videos might seem more popular, since people can watch them whenever they have time. Live streams, on the other hand, are available only while they are being broadcast, and life can get in the way.

However, live streams really are more engaging, because they are exclusive and they let you talk with viewers in real time via comments.

Here we’ll talk about the ways you can use live streaming for your business, and the best practices for connecting with your customers via broadcasting.

1. Host Webinars

Webinars are always popular. Share your knowledge with people to establish yourself as an expert and make your customers more loyal to what you do. Webinars can be either paid or free. If your business hasn’t built up fame and a reputation yet, the best practice is to host free ones: this way you’ll get more customers to watch you.


Paid webinars are a great option for companies that are already well known in their field, whose customers already know something about them.

2. Host Q&A Sessions

Q&A sessions are the best way to add some personal touch to your business, to show that behind your logo and a website there are humans that are ready to help. Q&A sessions are great to reply to some concerns your clients might have, to educate them and to show how you work.

You can prepare for such a session by gathering all the frequently asked questions in advance. Another way to do it is to answer questions from comments, or from tweets with a hashtag, in real time.

3. Stream Live Events

A conference or another important event at your company is a great reason to take out your smartphone and launch a live streaming app. Go ahead and show your customers how your company evolves and how your employees gain new experience.

Live events are made for streaming, whether it’s the presentation of a new product at a conference or a meeting with celebrities. Make your viewers feel like they are there: don’t forget to react to their comments and show the best moments of the event.

4. Host an Interview

Interviews with experts and influencers are the best way to get viewers’ attention, as they will enjoy seeing someone they already know. The main thing when you host an interview is to keep it lively and interesting. Work carefully on the questions you’re going to ask, and switch between topics often enough to avoid boredom and repetitiveness.

5. Show what’s behind the curtains

Don’t let your customers see only what you would normally show, and keep your broadcasts informal. Show how your business works on the inside, and let customers meet the people who work every day to deliver them the best service.

You can also show some details of your product creation process. People may enjoy what they get, but what really makes you unique in their minds is your story. Make your brand personal and alive, and people will appreciate it.

6. Share Important News

Today, people would rather watch a branded video or enjoy a live stream from a company than read text, so the best way to tell your customers about any changes or new products is to host a live broadcast and then make the recording available afterwards.

These are six ways you can make your business more memorable for customers with the help of live streaming.

Final Thoughts


The greatest thing about using a live video streaming app for business is that you don’t need a production team or a big marketing budget to broadcast: just grab your Android or iOS device and think about what you’re going to share. Live streaming doesn’t have to be official or perfect; the most important things are your open attitude and a genuine wish to share something awesome.

You can use any platform or social media on your phone to share live streams, but if you already have your own mobile app, you can add live streaming functionality to it. A mobile development company like Mobindustry can help you with that.

If it fits your business model, you can also create an additional source of income by providing paid webinars full of useful information that will educate your viewers.

Find your own creative ways to benefit from live broadcasting: it is definitely worth a shot!
