
[megaclisas-status] Handle arrays which don't report a drive count #17

Open
wants to merge 1 commit into master

Conversation

tomhughes

We have a CacheCade virtual disk configured which reports as:

ramoth [~] % sudo megacli -LDInfo -l2 -a0 -NoLog

Adapter 0 -- Virtual Drive Information:
CacheCade Virtual Drive: 2 (Target Id: 2)
Virtual Drive Type    : CacheCade 
Name          : 
RAID Level        : Primary-1, Secondary-0
State             : Optimal
Size          : 558.406 GB
Target Id of the Associated LDs : 0,3,1
Default Cache Policy  : WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Current Cache Policy  : WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU

Exit Code: 0x00

That currently fails with the following error, because there is no drive count:

ramoth [~] % sudo megaclisas-status 
-- Controller informations --
-- ID | Model
c0 | LSI MegaRAID SAS 9271-8iCC

-- Arrays informations --
-- ID | Type | Size | Status | InProgress
c0u0 | RAID1 | 465G | Optimal | None
c0u1 | RAID6 | 7633G | Optimal | None
Traceback (most recent call last):
  File "/usr/sbin/megaclisas-status", line 164, in <module>
    arrayinfo = returnArrayInfo(output,controllerid,arrayid)
  File "/usr/sbin/megaclisas-status", line 101, in returnArrayInfo
    if ldpdcount and (int(spandepth) > 1):
UnboundLocalError: local variable 'ldpdcount' referenced before assignment
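The patch itself isn't quoted in this thread, so the following is only a minimal sketch of how the parser could avoid the UnboundLocalError: initialise ldpdcount before the parsing loop so that virtual drives such as CacheCade, which never print a "Number Of Drives" line, reach the span-depth check with a defined (falsy) value. The function name return_array_info, the regexes, and the 'N/A' defaults are hypothetical simplifications, not the actual code from megaclisas-status.

    import re

    def return_array_info(output, controllerid, arrayid):
        # Sketch of a returnArrayInfo-style parser (assumed structure, not the real script).
        # Initialising ldpdcount up front is the guard that prevents the
        # UnboundLocalError for arrays that report no drive count.
        ldpdcount = None
        spandepth = 1
        raidlevel = size = state = 'N/A'

        for line in output.splitlines():
            if re.match(r'^RAID Level\s*:', line):
                raidlevel = line.split(':', 1)[1].strip()
            elif re.match(r'^Size\s*:', line):
                size = line.split(':', 1)[1].strip()
            elif re.match(r'^State\s*:', line):
                state = line.split(':', 1)[1].strip()
            elif re.match(r'^Number Of Drives\b', line):
                ldpdcount = int(line.split(':', 1)[1].strip())
            elif re.match(r'^Span Depth\s*:', line):
                spandepth = int(line.split(':', 1)[1].strip())

        # With ldpdcount defined as None, this check simply skips the
        # span-depth multiplication instead of raising.
        if ldpdcount and int(spandepth) > 1:
            ldpdcount = ldpdcount * int(spandepth)

        return 'c%du%d' % (controllerid, arrayid), raidlevel, size, state

Running this over the CacheCade output above would return something like ('c0u2', 'Primary-1, Secondary-0', '558.406 GB', 'Optimal') rather than crashing, which is the behaviour this pull request is after.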

eLvErDe changed the title from "Handle arrays which don't report a drive count" to "[megaclisas-status] Handle arrays which don't report a drive count" on Oct 13, 2015