Channel: VMware Communities : Discussion List - All Communities

Purple Screen of death when installing a Guest on a DL 380 G6


Hi Everyone,

I just got a DL 380 G6 server with some disks and RAM. I wanted to install ESXi 6.5.0 on it (with a specially edited image).

Installing ESXi itself is no problem; it boots normally, etc.
The problem appears when I install an OS on a guest (whichever: Ubuntu, Debian, or Windows). The installation starts and then freezes, showing me this "beautiful" PSOD.

I ran memtest86, but there are no errors.

It doesn't come from the ESXi image but from the server itself, because I tried other OSes (Proxmox, Debian, Ubuntu) and all of them have problems at installation.

So the PSOD is my only lead to solve all these problems.

Here is an OCR of the screen, and the screen itself in case of bad recognition.

Thank you in advance, everybody!

 

 

VMware ESXi 6.5.0 [Releasebuild-5310538 x86_64]

LINT1/NMI (motherboard nonmaskable interrupt), undiagnosed. This may be a hardware problem; please contact your hardware vendor

cr0=0x8001003d cr2=0x430000f80000 cr3=0xdf622000 cr4=0x216c

*PCPU0:70231/fssA10-27-7

PCPU 0: SUUSUSSV

Code start: 0x41801c800000 VMK uptime: 0:04:07:18.265

0x438040002c60:[0x41801c8ec931]PanicvPanicInt@vmkernel#nover+0x545 stack: 0x41801c8ec931
0x438040002d00:[0x41801c8ec9bd]Panic_NoSave@vmkernel#nover+0x4d stack: 0x438040002d60
0x438040002d60:[0x41801c8e9c8e]NMICheckLint@vmkernel#nover+0x19a stack: 0x0
0x438040002e20:[0x41801c8e9d24]NMI_Interrupt@vmkernel#nover+0x94 stack: 0x0
0x438040002ea0:[0x41801c92bb11]IDTNMIWork@vmkernel#nover+0x99 stack: 0x0
0x438040002f20:[0x41801c92cfa1]Int2_NMI@vmkernel#nover+0x19 stack: 0x418040000000
0x438040002f40:[0x41801c93c044]gate_entry@vmkernel#nover+0x0 stack: 0x0
0x439152b9bc70:[0x41801c88aec2]Power_ArchSetCState@vmkernel#nover+0x106 stack: 0x7fffffffffffffff
0x439152b9bca0:[0x41801cac44b3]CpuSchedIdleLoopInt@vmkernel#nover+0x39b stack: 0x1
0x439152b9bd10:[0x41801cac6d6a]CpuSchedDispatch@vmkernel#nover+0x114a stack: 0x410000000001
0x439152b9be40:[0x41801cac7fe2]CpuSchedWait@vmkernel#nover+0x27a stack: 0x100430372f20080
0x439152b9bec0:[0x41801cac8350]CpuSchedTimedWaitInt@vmkernel#nover+0xa8 stack: 0x206300002001
0x439152b9bf30:[0x41801cac8546]CpuSched_EventQueueTimedWait@vmkernel#nover+0x36 stack: 0x430372f20080
0x439152b9bf50:[0x41801c8c949c]helpFunc@vmkernel#nover+0x564 stack: 0x430078ff3050
0x439152b9bfe0:[0x41801cac8c95]CpuSched_StartWorld@vmkernel#nover+0x99 stack: 0x0

base fs=0x0 gs=0x418040000000 Kgs=0x0

2020-07-28T13:47:27.547Z cpu4:66541)Logs are stored on non-persistent storage. Consult product documentation to configure a syslog server or a scratch partition.

Coredump to disk. Slot 1 of 1 on device naa.600508b1001037383941424344450a00:9.
DiskDump: FAILED: Timeout
No file configured to dump data.
No vsan object configured to dump data.
No port for remote debugger. --Escape-- for local debugger.



Copy Fusion machine to Workstation - can't find darwin.iso


I have a VMware Fusion VM that has evolved over the years; I am trying to copy it to VMware Workstation 15. When it tries to start up, there is a message that it can't find darwin.iso. The machine tries to start but encounters an error and restarts (an unending cycle). I tried uninstalling VMware Tools in Fusion and then copying again, but got the same results. How can I open a Fusion VM in Workstation?

Instant Clone refresh works from Replica Connection Server only


I did my weekly refresh of my instant clone pool and found that, after I added a Replica Connection Server (CS), pushing the new image from the Primary CS creates the necessary replica VM but does not move on to the next step of creating a parent VM. It is then stuck in a 'Published' state with the 'Push Image' operation still running. From the Primary CS, from which I initiated the push, I do not have the option to cancel or reschedule the push; they are grayed out. Logging into the Replica CS, I do have the option to cancel or reschedule.

 

After cancelling and pushing from the Replica, everything spun up without issue. I am running Horizon 7.10.2.

 

This is week two of this and now the old template-replica-parent images are not deleting but the new snapshot is able to deploy. It seems unrelated, but I thought it should be mentioned.

 

Has anyone run into something like this?

Upgrading/replacing existing connection server

UCS 6.7 u3 ESXi Patching


Hello,

The latest custom ESXi ISO for Cisco servers available on VMware is 6.7.0, build 14320388, released 20/8/2019.

Is it recommended to install the newest embedded and installable patches that follow (the latest is from 09/06/2020, build 16316930), or is it better to keep the custom ISO at its original version? I am asking to see whether I will experience any issues with these servers if I decide to upgrade.

I don't have internet access on my vCenter; should I download the patch bundle from My VMware and upload it manually to VUM? And should I remediate all of it, or exclude some of the patch files?

Thank you!

 

 

 

The Small Footprint CIM Broker Daemon (SFCBD) for Dell R710 ESXI 6.5


ESXi 6.5 U3, updated from 6.0 U3, reports this to me:

 

The Small Footprint CIM Broker Daemon (SFCBD) is running, but no data has been reported. You may need to install a CIM provider for your storage adapter.

 

on my Dell R710. Maybe the R710 is not compatible with ESXi 6.5, but if any of you can give me a solution I would be grateful.

UEM Published App shortcut


I am trying to create a shortcut on users' desktops to a Horizon published app. I have installed the Horizon Client in my VDI template. I can get UEM to create the shortcut to the published app, but when I try to launch it, I get this error.

 

 

 

If I just open the Horizon Client within my VDI session and log in, I can run the app just fine; I am entitled to the application.

 

Anyone have any ideas?

run DNS server in VM to reach test apps by domain, possible without modification on the host system?


Hello,

I have several apps running as Docker containers in a VM for testing. Instead of putting each on a different port, I would like to use a reverse proxy and reach the apps via subdomains of the VM's domain.

 

In "DNS on the NAT Network" the documentation says I can simply add a DNS server to the NAT.
So my question: if I run a DNS server in that VM, could I reach a dockerized app at e.g. app1.myVMdomain.vmlocal, provided the DNS server and reverse proxy are configured properly?

 

I'm used to the fact that a DNS resolution query goes via the DNS settings on the NIC.
I don't want to modify the settings of the NIC, as I open the VM on different Windows machines.
According to "How networks work: what is a switch, router, DNS, DHCP, NAT, VPN and a dozen of other useful things | articles about programming on mkdev", there are virtual switches/NICs for the VM. In my case these are:

  • VMware Virtual Ethernet Adapter for VMnet1
  • VMware Virtual Ethernet Adapter for VMnet8

Would DNS resolution queries run through these adapters as well, so that the DNS server in my VM is asked for e.g. app1.myVMdomain.vmlocal?

I found only https://kb.vmware.com/s/article/2006955 and one other VMware Knowledge Base article that come close, but they are not exactly what I'm looking for.
Reading through https://superuser.com/questions/1388845/why-is-a-local-vm-not-resolving-via-dns-and-nslookup-in-the-way-i-am-thinking-it… it looks like I would have to modify resolv.conf / NIC settings, but the VM's IP is assigned by DHCP, so that is quite tricky.
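As a sketch of the in-VM DNS side (an assumption about the setup; the IP addresses below are hypothetical), dnsmasq can answer a whole wildcard domain with the VM's own address, so the reverse proxy can then route by Host header:

```
# /etc/dnsmasq.conf -- minimal sketch (IPs are hypothetical placeholders)
# Answer app1.myVMdomain.vmlocal, app2.myVMdomain.vmlocal, ... with the VM's NAT address
address=/myVMdomain.vmlocal/192.168.137.128
# Forward everything else to the VMware NAT's DNS
server=192.168.137.2
```

Clients still need to be pointed at this DNS server (or have queries forwarded to it) for the names to resolve, which is exactly the NIC-settings question above.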


Migrate from one VM to another VM


I have a VM running WIN 8 and I need to create one running WIN 10.

Can I use the "Migrate your PC…" menu option to move everything on the old VM to the new one so that I won't have to reinstall software?

If not, is there a way to do this?

 

I have to create a new VM instead of upgrading because it's a company install and has to have all of the IT Dept's extra stuff in it.

 

Thanks

Advice - Multiple independent networks using vlan


Hi all,

 

Would really appreciate some guidance.

 

Preface: VMware Workstation 15.5.6

 

I would like to run test lab networks, each separate from the others. For example, I may have a Windows Server 2019 VM with several Windows 10 VMs, or another network using a Windows Server 2016 VM and several Windows 10 VMs.

 

Allowing them access to my network, or the ability to see one another, would cause issues; their DHCP, for example, would conflict with my router's.

 

VMware Workstation virtual networking does not allow VLANs and only permits Bridged, NAT, and Host-only.

 

Only Bridged allows internet access. This is a must for me, as I need a way to update all the virtual machines. If they're stuck on NAT or Host-only, how do you keep your OS and apps up to date?

 

Any advice would be great.

 

Boyd

vm workstation 15 mouse issue


So I have a Windows host with a Linux virtual machine. I remote from my home to this Windows host and start VMware Workstation, and the mouse keeps jumping around, making it unusable. No issue with VirtualBox, though.

Move-VM error checks do not work


What I intend is: if one of the checks fails, it should not proceed to Move-VM; it should display the appropriate error message and go to the next record in the CSV file.

I have attempted that, but it does not work. When everything is fine it works perfectly, but if the port group or VM name or anything is incorrect, it displays the correct error message yet still throws red PowerCLI errors and fails at Move-VM, which is not very good. Can someone help me add the right error trapping, please?

 

 

I have added the full script below, and this is the part I am having issues with:

 

##################################################################################################

####################################################################################

# Kick off the vMotion VMs between Virtual Centers

####################################################################################

 

# Get the VM name to be migrated

 

##Command to Migrate VM ( includes compute and datastore live migration)

 

Foreach ($VMdetail in $VMdetails) {

    $VM = Get-VM -Name $VMdetail.VMNAME

    If ($VM) {
        Write-Host "VM is alive on the source VC..." -ForegroundColor Yellow $VM
    }
    Else {
        Write "VM cannot be found"
    }

    $destination = Get-Cluster $VMdetail.TgCluster | Get-VMHost | Select-Object -First 1

    If ($destination) {
        Write-Host "VM will be placed on following ESXi host..." -ForegroundColor Yellow $destination
    }
    Else {
        Write "Destination ESXi host is not accessible"
    }

    $networkAdapter = Get-NetworkAdapter -VM $VM

    If ($networkAdapter) {
        Write-Host "VM network adapter info..." -ForegroundColor Yellow $networkAdapter
    }
    Else {
        Write "Network adapter cannot be attached and migration will fail"
    }

    $destinationPortGroup = Get-VDSwitch -Name $VMdetail.TgSwitch | Get-VDPortgroup -Name $VMdetail.TgPortGroup

    If ($destinationPortGroup) {
        Write-Host "VM will be moved to following PortGroup... " -ForegroundColor Yellow $destinationPortGroup
    }
    Else {
        Write "Destination port group cannot be found and migration will fail"
    }

    $destinationDatastore = Get-Datastore $VMdetail.TgDatastore

    Move-VM -VM $VM -Destination $destination -NetworkAdapter $networkAdapter -PortGroup $destinationPortGroup -Datastore $destinationDatastore | Out-Null -ErrorVariable $movevmerr

    If ($movevmerr -eq $null) {
        Write-Host " VM migration in progress ........." -ForegroundColor Magenta
    }
    Else {
        Write-Host " Move-VM error $movevmerr"
    }

    # Write-Warning " VM cannot be moved due to configuration errors or heartbeat issue "

}

 

############################################################################################################
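For reference, one common PowerCLI pattern for the behaviour described above (skip to the next CSV row when any check or the move itself fails) is to make each cmdlet throw with -ErrorAction Stop and wrap the whole row in try/catch. This is only a sketch, reusing the column names from the script above:

```powershell
Foreach ($VMdetail in $VMdetails) {
    try {
        # Each cmdlet throws on failure instead of printing a red error and carrying on
        $VM          = Get-VM -Name $VMdetail.VMNAME -ErrorAction Stop
        $destination = Get-Cluster $VMdetail.TgCluster -ErrorAction Stop |
                       Get-VMHost | Select-Object -First 1
        if (-not $destination) { throw "No ESXi host found in cluster $($VMdetail.TgCluster)" }

        $networkAdapter       = Get-NetworkAdapter -VM $VM -ErrorAction Stop
        $destinationPortGroup = Get-VDSwitch -Name $VMdetail.TgSwitch -ErrorAction Stop |
                                Get-VDPortgroup -Name $VMdetail.TgPortGroup -ErrorAction Stop
        $destinationDatastore = Get-Datastore $VMdetail.TgDatastore -ErrorAction Stop

        Move-VM -VM $VM -Destination $destination -NetworkAdapter $networkAdapter `
                -PortGroup $destinationPortGroup -Datastore $destinationDatastore `
                -ErrorAction Stop | Out-Null
        Write-Host "Migrated $($VMdetail.VMNAME)" -ForegroundColor Magenta
    }
    catch {
        # Report the failed row and move on to the next CSV record
        Write-Warning "Skipping $($VMdetail.VMNAME): $($_.Exception.Message)"
        continue
    }
}
```

Note also that -ErrorVariable expects a variable name without the $ sign (-ErrorVariable movevmerr), and that placing it after `| Out-Null` attaches it to Out-Null rather than to Move-VM, which is why $movevmerr stays empty in the original script.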

 

 

 

###Full Script###

 

Import-Module -Name VMware.PowerCLI

 

####################################################################################

# Clear existing VC connections

####################################################################################

 

Clear-Host

 

try {
    Disconnect-VIServer -Server $global:DefaultVIServers -Confirm:$false -Force -ErrorAction Stop

    Write-Warning -Message " All Virtual Center connections are disconnected "
}
Catch {
    Write-Host "Administrator Message : There are no existing Virtual Center connections; we are good to proceed " -ForegroundColor Green
}

function Drawline {
    for ($i = 0; $i -lt (Get-Host).UI.RawUI.BufferSize.Width; $i++) { Write-Host -NoNewline -ForegroundColor Cyan "-" }
}

 

 

 

 

####################################################################################

# Variables

####################################################################################

 

#$VCnames = Import-Csv -Path 'D:\Temp\NSX-T\VC.CSV' -UseCulture

$VCnames = Get-Content -Path 'D:\Temp\NSX-T\VC.CSV' | Select-String '^[^#]' | ConvertFrom-Csv -UseCulture

 

 

 

 

# vCenter Source Details (SSO Domain Source UCP 4000)

 

$SrcvCenter = $VCnames.source

 

 

# vCenter Destination Details (SSO Domain Advisor )

 

$DstvCenter = $VCnames.destination

 

 

# vMotion Details

 

#$VMdetails = Import-Csv -Path 'D:\Temp\NSX-T\Migration.CSV' -UseCulture

$VMdetails = Get-Content -Path 'D:\Temp\NSX-T\Migration.CSV' | Select-String '^[^#]' | ConvertFrom-Csv -UseCulture

 

####################################################################################

 

 

 

####################################################################################

# Connect to source and destination Virtual centers

####################################################################################

 

# Connect to Source vCenter Servers

 

Connect-VIServer -Server $SrcvCenter -Credential (Import-Clixml -Path D:\Temp\NSX-T\mycred.xml) -WarningAction Ignore | Out-Null

Write-Host -ForegroundColor Yellow "`nConnected to Source vCenter..." $SrcvCenter

# Connect to Destination vCenter Servers

Connect-VIServer -Server $DstvCenter -Credential (Import-Clixml -Path D:\Temp\NSX-T\mycred.xml) -WarningAction Ignore | Out-Null

Write-Host -ForegroundColor Yellow "Connected to Destination vCenter..." $DstvCenter

Drawline
Write-Host "Processing ..................................." -ForegroundColor White

####################################################################################

 

 

 

####################################################################################

# Kick off the vMotion VMs between Virtual Centers

####################################################################################

 

# Get the VM name to be migrated

 

##Command to Migrate VM ( includes compute and datastore live migration)

 

Foreach ($VMdetail in $VMdetails) {

    $VM = Get-VM -Name $VMdetail.VMNAME

    If ($VM) {
        Write-Host "VM is alive on the source VC..." -ForegroundColor Yellow $VM
    }
    Else {
        Write "VM cannot be found"
    }

    $destination = Get-Cluster $VMdetail.TgCluster | Get-VMHost | Select-Object -First 1

    If ($destination) {
        Write-Host "VM will be placed on following ESXi host..." -ForegroundColor Yellow $destination
    }
    Else {
        Write "Destination ESXi host is not accessible"
    }

    $networkAdapter = Get-NetworkAdapter -VM $VM

    If ($networkAdapter) {
        Write-Host "VM network adapter info..." -ForegroundColor Yellow $networkAdapter
    }
    Else {
        Write "Network adapter cannot be attached and migration will fail"
    }

    $destinationPortGroup = Get-VDSwitch -Name $VMdetail.TgSwitch | Get-VDPortgroup -Name $VMdetail.TgPortGroup

    If ($destinationPortGroup) {
        Write-Host "VM will be moved to following PortGroup... " -ForegroundColor Yellow $destinationPortGroup
    }
    Else {
        Write "Destination port group cannot be found and migration will fail"
    }

    $destinationDatastore = Get-Datastore $VMdetail.TgDatastore

    Move-VM -VM $VM -Destination $destination -NetworkAdapter $networkAdapter -PortGroup $destinationPortGroup -Datastore $destinationDatastore | Out-Null -ErrorVariable $movevmerr

    If ($movevmerr -eq $null) {
        Write-Host " VM migration in progress ........." -ForegroundColor Magenta
    }
    Else {
        Write-Host " Move-VM error $movevmerr"
    }

    # Write-Warning " VM cannot be moved due to configuration errors or heartbeat issue "

}

 

 

 

#$vm | Move-VM -Destination $destination -NetworkAdapter $networkAdapter -PortGroup $destinationPortGroup -Datastore $destinationDatastore | out-null

 

 

####################################################################################

# Display VM information after Migration

####################################################################################

Drawline

 

Get-VM $VMdetails.VMNAME | Get-NetworkAdapter |
    Select-Object @{N="VM Name";E={$_.Parent.Name}},
        @{N="Cluster";E={Get-Cluster -VM $_.Parent}},
        @{N="ESXi Host";E={Get-VMHost -VM $_.Parent}},
        @{N="Datastore";E={Get-Datastore -VM $_.Parent}},
        @{N="Network";E={$_.NetworkName}} | Format-List

 

Drawline

 

 

 

####################################################################################

# Disconnect all Virtual centers

####################################################################################

 

Write-Host " YOU WILL NOW BE DISCONNECTED FROM VIRTUAL CENTRE " -ForegroundColor Magenta

try {
    Disconnect-VIServer -Server $global:DefaultVIServers -Confirm:$false -Force -ErrorAction Stop

    Write-Warning -Message " All Virtual Center connections are disconnected "
}
Catch {
    Write-Host "Administrator Message : There are no existing Virtual Center connections; we are good to proceed " -ForegroundColor Green
}

 

 

 

 

NSX-T 3.0 to 3.0.1 upgrade fails on ESXi 7.0 host VIB upgrades due to "Failed to load module nsxt-vsip"


Problem: Cannot upgrade ESXi 7.0 hosts from NSX-T 3.0.0 VIBs to 3.0.1 VIBs.

 

Scenario: vCenter 7.0, ESXi 7.0, NSX-T 3.0.0, ESXi hosts are running N-VDS exclusively (2 pNIC, upgraded from NSX-T 2.5).

 

NSX-T Edge upgrade from 3.0.0 to 3.0.1 was successful, but none of the ESXi hosts in the 4-node cluster are able to have their VIBs upgraded from 3.0.0 to 3.0.1.  Error message in NSX-T Manager is:

 

Install of offline bundle failed on host 09e41e11-6ce5-4fd8-a4ad-3295f927e540 with error : [LiveInstallationError] Error in running ['/etc/init.d/nsx-datapath-dl', 'start', 'upgrade']: Return code: 1 Output: start upgrade begin Exception: Traceback (most recent call last): File "/etc/init.d/nsx-datapath-dl", line 963, in <module> DualLoadUpgrade() File "/etc/init.d/nsx-datapath-dl", line 835, in DualLoadUpgrade LoadKernelModules() File "/etc/init.d/nsx-datapath-dl", line 180, in LoadKernelModules nsxesxutils.loadModule(modName, modParam) File "/usr/lib/vmware/nsx-esx-datapath/lib/python3.5/nsxesxutils.py", line 360, in loadModule (moduleName, out.decode())) Exception: Failed to load module nsxt-vsip-16404614: vmkmod: VMKModLoad: VMKernel_LoadKernelModule(nsxt-vsip-16404614): Failure Cannot load module nsxt-vsip-16404614: Failure vmkmod: VMKModLoad: VMKernel_LoadKernelModule(nsxt-vsip-16404614): Failure Cannot load module nsxt-vsip-16404614: Failure It is not safe to continue. Please reboot the host immediately to discard the unfinished update. Please refer to the log file for more details..

 

Error in esxupdate.log:

 

[LiveInstallationError]

Error in running ['/etc/init.d/nsx-datapath-dl', 'start', 'upgrade']:

Return code: 1

Output: start upgrade begin

Exception:

Traceback (most recent call last):

   File "/etc/init.d/nsx-datapath-dl", line 963, in <module>

     DualLoadUpgrade()

   File "/etc/init.d/nsx-datapath-dl", line 835, in DualLoadUpgrade

     LoadKernelModules()

   File "/etc/init.d/nsx-datapath-dl", line 180, in LoadKernelModules

     nsxesxutils.loadModule(modName, modParam)

   File "/usr/lib/vmware/nsx-esx-datapath/lib/python3.5/nsxesxutils.py", line 360, in loadModule

     (moduleName, out.decode()))

Exception: Failed to load module nsxt-vsip-16404614: vmkmod: VMKModLoad: VMKernel_LoadKernelModule(nsxt-vsip-16404614): Failure

Cannot load module nsxt-vsip-16404614: Failure

vmkmod: VMKModLoad: VMKernel_LoadKernelModule(nsxt-vsip-16404614): Failure

Cannot load module nsxt-vsip-16404614: Failure

 

It is not safe to continue. Please reboot the host immediately to discard the unfinished update.

Please refer to the log file for more details.

 

Installing the VIBs manually via Lifecycle Manager (KB 78682) fails with the exact same error as above in esxupdate.log. Installing the VIBs via the CLI (KB 78679) results in the same error. I verified the boot banks (KB 74864) and they have plenty of free space (95% free). The ESXi install was a fresh install of ESXi 7.0 on a 128 GB boot-from-SAN LUN (SSD).

 

I never had any problems with NSX-T VIB upgrades from 2.4.x to 2.5.x to 3.0.0, so I'm curious why the 3.0.0 -> 3.0.1 upgrade is so challenging.  Has anyone else run into this?

 

Thanks,

Bill

dead I/O on igb-nic (ESXi 6.7)


Hi,

 

I'm running a homelab with ESXi 6.7 (13006603). I have three NICs in my host: two onboard and one Intel ET 82576 dual-port PCIe card. All NICs are assigned to the same vSwitch; actually only one is connected to the (physical) switch at the moment.

When I'm using one of the 82576 NICs and put heavy load on it (like backing up VMs via Nakivo B&R), the NIC stops working after a while and is dead/not responding anymore. Only a reboot of the host or (much easier) physically reconnecting the NIC (cable out, cable in) solves the problem.

 

I guessed there was a driver issue, so I updated to the latest driver by Intel:

 

 

[root@esxi:~] /usr/sbin/esxcfg-nics -l

Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description

vmnic0  0000:04:00.0 ne1000      Down 0Mbps      Half   00:25:90:a7:65:dc 1500   Intel Corporation 82574L Gigabit Network Connection

vmnic1  0000:00:19.0 ne1000      Up   1000Mbps   Full   00:25:90:a7:65:dd 1500   Intel Corporation 82579LM Gigabit Network Connection

vmnic2  0000:01:00.0 igb         Down 0Mbps      Half   90:e2:ba:1e:4d:c6 1500   Intel Corporation 82576 Gigabit Network Connection

vmnic3  0000:01:00.1 igb         Down 0Mbps      Half   90:e2:ba:1e:4d:c7 1500   Intel Corporation 82576 Gigabit Network Connection

[root@esxi:~] esxcli software vib list|grep igb

net-igb                        5.2.5-1OEM.550.0.0.1331820            Intel   VMwareCertified   2019-06-16

igbn                           0.1.1.0-4vmw.670.2.48.13006603        VMW     VMwareCertified   2019-06-07

 

Unfortunately this didn't solve the problem.

However, this behaviour doesn't occur when I'm using one of the NICs on the ne1000 driver.

Any idea how to solve the issue?

(... or at least dig down to its root?)

 

Thanks a lot in advance.

 

Regards

Chris

 

PS: I found another thread which might be connected to my problem: "Stopping I/O on vmnic0". Same system behaviour, same driver.

vSphere 6.7 VCSA Cannot Edit Scheduled tasks through GUI, need to create a script to delete all Scheduled Tasks and then re-create them with correct runtimes


I have manually created scheduled tasks for all my prod VMs to take snapshots and send a confirmation email at specific times, prior to allowing Windows Updates.

 

Some of these jobs ran OK and others did not even start. Furthermore, I cannot edit them; I would need to delete and recreate them.

 

I am hoping for some help creating a script to delete all scheduled tasks with a specific job name, like the following, but I need to know how to do the delete.

 

Get-VIScheduledTasks | Where-Object {$_.Name -like '*PRE WSUS*'} | Format-Table -Autosize

 

Once I have done this, I want to create a script which pulls in VM names and other fields, like time and snapshot names, from a CSV to create the scheduled tasks.

 

I see some scripts for this which I can borrow from, but any help is much appreciated.
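For the delete step, here is a hedged sketch that goes through the vSphere API's ScheduledTaskManager via Get-View (it assumes an active Connect-VIServer session, and note that RemoveScheduledTask() is permanent and prompts for nothing):

```powershell
# Enumerate all scheduled tasks on the connected vCenter and remove the matching ones
$si  = Get-View ServiceInstance
$stm = Get-View $si.Content.ScheduledTaskManager

foreach ($taskRef in $stm.ScheduledTask) {
    $task = Get-View $taskRef
    if ($task.Info.Name -like '*PRE WSUS*') {
        Write-Host "Removing scheduled task:" $task.Info.Name
        $task.RemoveScheduledTask()   # irreversible
    }
}
```

Running the loop once with the RemoveScheduledTask() line commented out is a cheap dry run to confirm the name filter matches only what you expect.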


VMware Revert from Snapshot Error "Trust Relationship between Workstation and Domain Failed"


I took a snapshot of a VM in VMware, and after a week I tried to revert to the snapshot and log on to the system. Now I get the error "Trust Relationship between Workstation and Domain Failed", since the snapshot has the old password from the CyberArk PAS solution. I am unable to log in with the local sysadmin account or a domain account to move it to a workgroup and add it back to the domain. I tried using the PowerShell script from this article, and still no luck. How can I fix this?

vRO 8.1 can't use scroll wheel in code editor with Firefox


Exactly as per the subject.

 

The scroll wheel won't work in the code window. You have to click and drag the scroll bar.

It works great in Chrome, so it seems it may be something odd in Firefox.

 

Anyone else seen this?

 

Also... is there a Dark mode coming?

 

Cheers


Dean

Linux Host : Kernel 5.8-rc1/rc2. vmware-modconfig segfaults. vmware runtime causes system reboot when Guest VM start is attempted


On x86_64 systems, Fedora 32, VMware Workstation 15.5.6 with latest vmmon/vmnet patches.  

Host system Kernels tested are 5.8-rc1 and 5.8-rc2 (kernel.org), and Fedora 5.8.0-0.rc1.20200617git69119673bd50.1.fc33.x86_64

vmmon/vmnet with patches compile and load OK.

vmware-modconfig segfaults:

# vmware-modconfig --console --install-all
[AppLoader] GLib does not have GSettings support.
vmware-modconfi[14782]: segfault at 0 ip 00007f786b9d47f7 sp 00007ffc52a36cf8 error 4 in libc-2.31.so[7f786b95a000+150000]
Code: Bad RIP value.
Segmentation fault (core dumped)

vmware runtime segfaults:

$ vmware
/usr/bin/vmware: line 105: 13529 Segmentation fault (core dumped) "$BINDIR"/vmware-modconfig --appname="VMware Workstation" --icon="vmware-workstation"

Running the vmware binary directly, the VMware Workstation window/menu displays correctly:

# /usr/lib/vmware/bin/vmware

Start up a guest operating system; then, after 'waiting for connection', the host system immediately reboots, with no prior errors or warnings shown.

This has been reproduced on three different systems (HP Z220, HP 800G1, and Dual Xeon E5-2687W)

Robert Gadsdon.  June 23rd 2020.

Deep investigation on GPU Passthrough not working anymore after upgraded from 6.5 to 6.7, what's different on PCIe resetting?


I have been trying for a month to investigate the GPU passthrough issue on 6.7. Here is what I found.

 

Motherboard: MX32-L40 (a Gigabyte server board which officially supports ESXi 6.5; all ESXi passthrough requirements are met by this MB)

VM OS: Windows 10 1809 Oct

ESXi Version: ESXi6.5u2(with latest patch), ESXi6.7u1(with latest patch)

GPU: I tried both AMD RX590 and Nvidia 1660Ti

Passed-through devices: all sub-devices of the GPU, including HDMI audio and the related bus.

 

Issue:

Basically,

If I start the VM for the first time after the ESXi host started, the GPU just works like a charm.

If I restart or stop/start the VM, the GPU device stops working, with a warning in Device Manager: error code 43.

If I disable the GPU in Device Manager before a VM restart/stop-start, then I'm able to re-enable the GPU after the VM reboot.

 

First, I'm pretty sure none of the following tweaks help:

 

  1. UEFI or Legacy boot of ESXi host
  2. UEFI or BIOS boot of Windows 10 VM
  3. ESXi 6.5(with latest patch) or ESXi 6.7(with latest patch)
  4. AMD Rx590 or Nvidia 1660 Ti
  5. pciPassthru.use64bitMMIO
  6. hypervisor.cpuid.v0
  7. pciHole.start/end
  8. svga.present

 

I tried them one by one, with ALL combinations, which took me several days, since server MBs are really slow to boot.

The conclusion is the same:

If it's the first time starting the VM after ESXi boot, the GPU just works. If I reboot/stop-start the VM, then the GPU stops working with error code 43.

 

Then I realized it's a PCIe resetting issue. so I tried the following /etc/vmware/passthrough.conf combinations:

 

# NVIDIA

 

10de  ffff  link   false

10de  ffff  bridge   false

10de  ffff  d3d0   false

10de  2182  link   false

10de  2182  bridge   false

10de  2182  d3d0   false

 

# AMD Video Card

 

1002 ffff link false

1002 ffff bridge false

1002 ffff d3d0 false

 

It took me a whole week to try ALL those combinations. Finally, I found that ONLY ONE combination works for me:

  • ESXi 6.5
  • 10de  2182  d3d0   false

 

Then I upgraded ESXi to 6.7u1 with the SAME settings, and it just doesn't work anymore.

 

I found something interesting in the log. When resetting the PCIe devices,

 

ESXi 6.5 resets them ONE BY ONE, with 4 seconds interval:

 

2019-03-07T05:56:29.586Z| vcpu-0| I125: UHCI: HCReset

2019-03-07T05:56:29.593Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.0    // This is my GPU

2019-03-07T05:56:33.603Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.1    // This is my GPU

2019-03-07T05:56:37.613Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.2    // This is my GPU

2019-03-07T05:56:41.622Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.3    // This is my GPU

2019-03-07T05:56:45.632Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:72:00.0

2019-03-07T05:56:49.692Z| vcpu-0| I125: NVME-PCI: PCI reset on controller nvme0.

 

while ESXi 6.7 resets them in a batch, without intervals:

 

2019-03-07T09:08:05.219Z| vcpu-0| I125: UHCI: HCReset

2019-03-07T09:08:05.223Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.0    // This is my GPU

2019-03-07T09:08:05.224Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.1    // This is my GPU

2019-03-07T09:08:05.225Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.2    // This is my GPU

2019-03-07T09:08:05.225Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:50:00.3    // This is my GPU

2019-03-07T09:08:05.227Z| vcpu-0| I125: PCIPassthru: Resetting Device at 0000:72:00.0

2019-03-07T09:08:09.258Z| vcpu-0| I125: NVME-PCI: PCI reset on controller nvme0.

 

There must be some difference between 6.5 and 6.7 in the way they reset PCIe devices.

Does anyone know what the difference is and how to make it work in 6.7?

Issues with published Linux applications


Hello,

 

Now I've tried the most anticipated feature (at least for HiEd) of Horizon 8: Linux applications.

 

However, there seem to be a few issues with it:

- If a user launches several applications, they all appear in the same window, in a stack, most recently started on top.

- If a user minimizes an application, it's lost from the client view.

- If an application has an indicator bar gadget (like Remmina), the application gives an error about it.

- It takes quite some time to start the applications, including subsequent ones, even though they are all eventually started from the same underlying session, which means there is only a single login.

- The Horizon Client application window doesn't show the name of the open application in the title bar.

- From the Horizon HTML5 client (launched with Chrome), the Linux applications don't appear at all; only a gray screen is shown.

 

Test setup:

Horizon Client 2006 for Mac OS

VM: Ubuntu 18.04 LTS ( 2006 agent installed with --multi-session parameter), Mate Desktop environment, Instant clone farm, SSSD authentication provider, Microsoft Active Directory bound computer accounts

UAG 3.10

 

I think VMware can never beat Microsoft on their own field (I mean WVD), but WVD doesn't have one thing at all: Linux applications, which are super cool for universities, where Linux has a strong base in the natural sciences.

 

br, Perttu
