My script takes 12+ hours - any suggestions?

Updated: 2023-12-04 15:16:29




Hi, I have made a script that processes normals for a flat-shaded 3D mesh.
It compares every vert with every other vert to look for verts that can
share normals, and it takes ages.

I'm not asking anyone to rewrite the script - just have a look for any
stupid errors that might be sucking up time.


#!/usr/bin/python
##############
# AUTOSMOOTH #
##############
import sys
import os
import string
import math

# Used to write floats that don't contain letters.
def saneFloat(float):  # note: the parameter shadows the builtin 'float'
    #return '%(float)b' % vars() # 6 fp as house.hqx
    return '%f' % float # 10 fp

# Open file from the command line, turn into a list and close it.
file = open(sys.argv[-1], 'r')
fileLineList = file.readlines()
file.close()

# Remember the number of lines for progress indication.
fileLen = len(fileLineList)

# Autosmooth value. Higher will autosmooth larger angles.
maxDiff = 1.66

# Loop through the lines.
lineIndex = 0
while lineIndex < len(fileLineList):

    # Find Geom TAG..
    if str(fileLineList[lineIndex])[0:8] == 'Geometry':
        lineIndex += 1
        # break if looping beyond the file,
        if lineIndex > len(fileLineList):
            break

        # Here we remember lines that have been processed.
        # It needs to be reset for each geom object.
        listOfDoneLines = []

        # Start a new loop that checks the current vert against all the others.
        newLoopindex = lineIndex
        while len(string.split(fileLineList[newLoopindex])) == 12:
            print '\n', fileLen, newLoopindex,

            #vertexnum = newLoopindex - lineIndex

            # Compare the 2 lines.
            newCompareLoopindex = newLoopindex + 1  # compare the current vert to this new one
            thisPassDoneLines = []  # act upon this after comparing with each vert
            thisPassDoneNormals = []
            while len(string.split(fileLineList[newCompareLoopindex])) == 12:

                # Speed up the process by using 2 if's, splitting the string
                # only if it has not been evaluated already.
                if newCompareLoopindex not in listOfDoneLines:
                    comp1 = string.split(fileLineList[newLoopindex])
                    comp2 = string.split(fileLineList[newCompareLoopindex])

                    if [comp1[0], comp1[1], comp1[2]] == [comp2[0], comp2[1], comp2[2]]:

                        if newLoopindex not in listOfDoneLines: # only needs to be added once
                            listOfDoneLines.append(newLoopindex)

                        if newLoopindex not in thisPassDoneLines: # only needs to be added once
                            thisPassDoneLines.append(newLoopindex)
                            thisPassDoneNormals.append([eval(comp1[8]), eval(comp1[9]),
                                                        eval(comp1[10])])

                        listOfDoneLines.append(newCompareLoopindex)
                        thisPassDoneLines.append(newCompareLoopindex)
                        thisPassDoneNormals.append([eval(comp2[8]), eval(comp2[9]),
                                                    eval(comp2[10])])
                        print '#',

                newCompareLoopindex += 1

            if len(thisPassDoneLines) > 1: # OK, we have some verts to smooth.
                # This loops through all verts and assigns each a new normal.
                for tempLineIndex in thisPassDoneLines:

                    tempSplitLine = string.split(fileLineList[tempLineIndex])

                    # We add to these for every vert that is similar,
                    # then divide them to get an average.
                    NormX = 0
                    NormY = 0
                    NormZ = 0

                    # Count of verts that have normals close to ours.
                    thisVertFrendsCount = 0

                    # This compares the current vert with all the others;
                    # if they are close then count them as friends.
                    for tNorm in thisPassDoneNormals:
                        if abs(eval(tempSplitLine[8]) - tNorm[0]) + \
                           abs(eval(tempSplitLine[9]) - tNorm[1]) + \
                           abs(eval(tempSplitLine[10]) - tNorm[2]) < maxDiff:
                            NormX += tNorm[0]
                            NormY += tNorm[1]
                            NormZ += tNorm[2]
                            thisVertFrendsCount += 1

                    # Now divide the normals by the number of friends.
                    NormX /= thisVertFrendsCount
                    NormY /= thisVertFrendsCount
                    NormZ /= thisVertFrendsCount

                    # Make a unit length vector.
                    d = NormX*NormX + NormY*NormY + NormZ*NormZ
                    if d > 0:
                        d = math.sqrt(d)
                        NormX /= d; NormY /= d; NormZ /= d

                    # Write the normal to the current line.
                    tempSplitLine[8] = str(saneFloat(NormX))
                    tempSplitLine[9] = str(saneFloat(NormY))
                    tempSplitLine[10] = str(saneFloat(NormZ))

                    fileLineList[tempLineIndex] = string.join(tempSplitLine) + '\n'

            newLoopindex += 1

    lineIndex += 1

# Write the result back to the file.
file = open(sys.argv[-1], 'w')
file.writelines(fileLineList)
file.close()

It's indenting for me - Netscape/Mozilla

John Roth wrote:
Your script isn't indented properly in Outlook Express,
making it very difficult to read

John Roth

"Ideasman" <cp******@pacific.net.au> wrote in message
news:3f***********************@freenews.iinet.net. au...
Hi I have a made a script that process normals for a flat shaded 3D



mesh''s.

It compares every vert with every other vert to look for verts that can
share normals and It takes ages.

I''m not asking anyone to rewrite the script- just have a look for any
stupid errors that might be sucking up time.








"Ideasman" <cp******@pacific.net.au> wrote in message
news:3f***********************@freenews.iinet.net. au...
Hi, I have made a script that processes normals for a flat-shaded 3D mesh. It compares every vert with every other vert to look for verts that can share normals, and it takes ages.



I wonder if there is some way to sort or classify verts so you do not
have to do a complete n×n comparison.

TJR
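TJR's idea can be sketched as follows (an editor's illustration, not code from the thread; the six-field vert tuple layout is an assumption). Hashing verts by their position means only verts in the same bucket ever need comparing, turning the n×n pass into a single linear scan:

```python
# Sketch: bucket verts by position so only same-position verts compare.
def group_by_position(verts):
    """verts: list of (x, y, z, nx, ny, nz) tuples. Returns lists of
    indices that share an identical position."""
    buckets = {}
    for i, v in enumerate(verts):
        key = (v[0], v[1], v[2])           # position is the hash key
        buckets.setdefault(key, []).append(i)
    return [idxs for idxs in buckets.values() if len(idxs) > 1]

verts = [
    (0.0, 0.0, 0.0, 1.0, 0.0, 0.0),
    (1.0, 0.0, 0.0, 0.0, 1.0, 0.0),
    (0.0, 0.0, 0.0, 0.0, 0.0, 1.0),        # same position as vert 0
]
print(group_by_position(verts))            # -> [[0, 2]]
```

Each group can then be smoothed in one pass, so the cost is O(n) rather than O(n²) when positions hash well.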


Hi.

First, it might be helpful to see a sample of the contents of the file you're
pulling your information from. What do the lines look like? What information
do they contain? Does each line contain the same type of information? What
are the parts of the information contained in each line, and how do they
relate to the problem you're trying to solve? How many 'verts' (vertices?)
are there?

Describe the algorithm you're applying in English. What are the steps you're
taking in order to solve this problem? They are not apparent from this code.
Can you not use some functions to break this mass of code into manageable
chunks?

To be honest, I have no idea what your code is trying to do. You say it does
this:

"Ideasman" <cp******@pacific.net.au> wrote in message
news:3f***********************@freenews.iinet.net. au...
Hi, I have made a script that processes normals for a flat-shaded 3D mesh. It compares every vert with every other vert to look for verts that can
share normals, and it takes ages.



OK. I'm going to assume that each line in the file represents the
information relevant to one 'vert' (whatever that is). So, I would suggest
going through the file, one line at a time, constructing a list of vert
objects, which hold this information. Something like:

fd = file(filename)
verts = [Vert(*line.split()) for line in fd]
fd.close()

where Vert() is a constructor for the class Vert:

class Vert(object):
    def __init__(self, what, ever, information,
                 will, be, split, from_, the, line):
        self.what = what
        ...

    def share_normals(self, other):
        "returns True when self and other can share normals"
        ...
So, now you have a list of Verts. If you don't have the 'normals' for these
Verts from the file, and have to calculate them,
add a 'normalize(self)' method to the Vert class, and call that inside
__init__() at the appropriate time.
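A minimal sketch of such a Vert class (the (x, y, z, nx, ny, nz) field layout and the 1.66 threshold are assumptions carried over from the original script, not a known file format):

```python
import math

# Hypothetical Vert: position, a normal normalized in __init__, and a
# share_normals test that mirrors the script's Manhattan-distance check.
class Vert(object):
    def __init__(self, x, y, z, nx, ny, nz, max_diff=1.66):
        self.pos = (float(x), float(y), float(z))
        self.normal = self.normalize((float(nx), float(ny), float(nz)))
        self.max_diff = max_diff

    @staticmethod
    def normalize(n):
        d = math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
        return (n[0]/d, n[1]/d, n[2]/d) if d > 0 else n

    def share_normals(self, other):
        # Same position, and normals within the smoothing threshold.
        return (self.pos == other.pos and
                sum(abs(a - b) for a, b in zip(self.normal, other.normal))
                < self.max_diff)

v0 = Vert(0, 0, 0, 1, 0, 0)
v1 = Vert(0, 0, 0, 0.9, 0.1, 0)
print(v0.share_normals(v1))            # -> True
```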

So, now we have a list of normalized Verts. And we want to compare every
Vert to each other to look for Verts that can share normals.
OK. Suppose

verts = [v0, v1, v2, v3] # where v0, v1, v2, v3 are all Vert instances.

We want to see if v0 can share normals with v1, v2, and/or v3;
and if v1 can share normals with v0, v2, and/or v3; and so on.

If I check v0.share_normals(v1), do I need to check v1.share_normals(v0)?
Probably not. But then again, I haven't a clue what I'm talking about :)

Assuming I'm correct, and we don't need to check v1.share_normals(v0), then
we should probably keep a record. I'll assume that a vert can share a normal
with itself, so we don't need to keep a record for that.

I suggest making a jagged array. Call it 'SHARED'. Build it as follows:
SHARED = [[vert.share_normal(other) for other in verts[i+1:]]
for i, vert in enumerate(verts[:-1])]

And, when we're done, 'SHARED' is a jagged array filled with zeroes and
ones, something like this:
0 1 2 3
0 - [[1, 0, 1],
1 - - [ 0, 1],
2 - - - [0]]
3 - - - -

# verts indices are listed on the outside of the table
SHARED == [ [1, 0, 1], [0, 1], [0] ]

then we make a function like:

def is_shared(index1, index2):
    if index1 == index2:
        return True  # a vert shares a normal with itself
    # adjust for jagged list indexing
    if index1 > index2:
        index1, index2 = index2, index1
    index2 -= index1 + 1
    return SHARED[index1][index2]

so we can use it to ask, later, if v0 and v3 share normals by using

'is_shared(0,3)' or 'is_shared(3,0)'  # SHARED[0][2] == True

and then we can do whatever action is appropriate using verts[0] and
verts[3].

By storing this information, we avoid having to re-calculate whether v0 and
v3 share normals every time we need to know that. And, by storing the
information in a jagged array, we save some space.
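As a quick, runnable sanity check of the jagged indexing (with SHARED hard-coded to the example values rather than built from real Verts):

```python
# Pairwise share results for four verts, hard-coded so the jagged
# indexing can be checked in isolation.
SHARED = [[1, 0, 1], [0, 1], [0]]

def is_shared(index1, index2):
    if index1 == index2:
        return True                     # a vert shares a normal with itself
    if index1 > index2:                 # adjust for jagged list indexing
        index1, index2 = index2, index1
    index2 -= index1 + 1
    return SHARED[index1][index2]

print(is_shared(0, 3), is_shared(0, 2))    # -> 1 0
```

Note that the `index2 -= index1 + 1` adjustment must run whether or not the indices were swapped, which is why it sits outside the `if`.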

# NOTE: if v0.share_normal(v1) and v1.share_normal(v2),
# then v0.share_normal(v2) must also be True.
# so we could make SHARED a dictionary instead

SHARED = {0: [0,1,3],  # v0 shares normals with itself, v1 and v3
          1: [0,1,3],
          2: [2],
          3: [0,1,3]}

Building this dictionary may save some share_normal() calls:

SHARED = {}
for i, vert in enumerate(verts):
    SHARED[i] = [i]
    for j, other in enumerate(verts):
        if j == i: continue
        if vert.share_normal(other):
            if j < i:
                # the earlier vert will have accumulated all
                # of this vert's share partners, so just use those
                SHARED[i] = SHARED[j]
                break
            else:
                SHARED[i].append(j)
Then, to see whether v0.share_normal(v3), we can just ask

3 in SHARED[0] # and vice versa
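Stubbing share_normal out over plain indices (an assumption for the demo), the dictionary-building loop reproduces the example table:

```python
# Stub pairwise results: verts 0, 1 and 3 share; vert 2 is alone.
pairs = {(0, 1), (0, 3), (1, 3)}
share_normal = lambda a, b: (a, b) in pairs or (b, a) in pairs

verts = range(4)                       # stand-ins for Vert instances
SHARED = {}
for i, vert in enumerate(verts):
    SHARED[i] = [i]
    for j, other in enumerate(verts):
        if j == i:
            continue
        if share_normal(vert, other):
            if j < i:
                SHARED[i] = SHARED[j]  # earlier vert accumulated the group
                break
            else:
                SHARED[i].append(j)

print(SHARED)    # -> {0: [0, 1, 3], 1: [0, 1, 3], 2: [2], 3: [0, 1, 3]}
```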

But, this takes up more space than the jagged list version. And the look-up
is slower.

What might be better is a straight partition, like this:

SHARED = [[0,1,3], [2]]  # I'll leave how to get this up to you...

This takes up less space than each of the previous versions.
How do we use it to tell whether v0.share_normal(v3) ?

A new version of is_shared(index1, index2) would have to
find the tuple containing index1, then see if index2 is also
in that tuple. This look-up is the slowest of all!
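Sean leaves building the partition open; one hypothetical way, relying on the transitivity noted earlier, is to merge each vert into the first group whose representative it shares a normal with:

```python
# Build the partition by comparing each vert only against one
# representative per group (valid if share_normal is transitive).
def partition(n, shares):
    groups = []
    for i in range(n):
        for g in groups:
            if shares(i, g[0]):        # compare against the group's representative
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

def is_shared(shared, index1, index2):
    # find the group containing index1, then test membership of index2
    for group in shared:
        if index1 in group:
            return index2 in group
    return False

# stub pairwise results for 4 verts: 0, 1 and 3 share; 2 is alone
pairs = {(0, 1), (0, 3), (1, 3)}
shares = lambda a, b: a == b or (a, b) in pairs or (b, a) in pairs
SHARED = partition(4, shares)
print(SHARED)                          # -> [[0, 1, 3], [2]]
print(is_shared(SHARED, 0, 3))         # -> True
```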
And, after all of that, I still can't say whether this has been of use for
you, because I have very little understanding as to what you meant for your
code to be doing.

Hopefully, this was somewhat near the mark. Also, be aware that none of the
code above has been tested.

Good luck with what you're doing,
Sean