Commit 5bd6ce00 authored by Robert Butora

fixes conflict in README before merge

parents e93de058 9b5bd30f
FROM registry.fedoraproject.org/fedora:latest
RUN dnf -y update &&\
    dnf -y install systemd httpd python3-mod_wsgi pip ps hostname &&\
    dnf clean all &&\
    pip install pandas tables scipy Pyro4 &&\
    mkdir -p /var/www/wsgi-scripts /srv/sed-data
COPY wsgi.conf /etc/httpd/conf.d/
COPY ./wsgi-scripts/*.py /var/www/wsgi-scripts/
COPY ./start-sed.sh /root
EXPOSE 80
CMD ["/root/start-sed.sh"]
#ENTRYPOINT ["/usr/sbin/httpd","-DFOREGROUND"]
#ENTRYPOINT /bin/bash
Makefile 0 → 100644
.PHONY: build
build:
docker build -t sedmods -f Dockerfile .
# --name container's name (also added to /etc/hosts)
# -d --detach run in background
# -t --tty allocate pseudo-tty terminal
# --rm automatically remove resources at exit
.PHONY: run
run:
docker run --detach --tty \
--name sedmod-test \
--rm \
-v /srv/sed-data:/srv/sed-data:ro \
-p 9090:80 sedmods
@echo ${PWD}
# -v ${PWD}/new_dataset:/sed-data:ro \
# -v ${PWD}/wsgi-scripts:/var/www/wsgi-scripts:rw \
.PHONY: exec-bash
exec-bash:
docker exec -it sedmod-test bash
# docker login git.ia2.inaf.it:5050 -u robert.butora
# pwd: the usual old one: C*n
.PHONY: publish-locally
publish-locally:
docker tag sedmods git.ia2.inaf.it:5050/vialactea/vlkb-sedmods/sedmods:0.1.2
docker push git.ia2.inaf.it:5050/vialactea/vlkb-sedmods/sedmods:0.1.2
docker image remove git.ia2.inaf.it:5050/vialactea/vlkb-sedmods/sedmods:0.1.2
# with podman
.PHONY: build-podman
build-podman:
podman build --tag vlkb-sedmods -f ./Dockerfile
.PHONY: run-podman
run-podman:
podman run -dt \
--name sedmod-test \
--rm \
-v ${PWD}/sed-data:/sed-data:z \
-v ${PWD}/wsgi-scripts:/var/www/wsgi-scripts:z \
-p 8080:80 vlkb-sedmods
## Deploy SEDModS
### Pull repo
When pulling this repo you'll end up with:
|-- Dockerfile (container description)
|-- README.md (this text)
|-- sed-data
| `-- link.to.hdf5 (dummy data file, see later)
|-- wsgi.conf (WSGI configuration for the http server)
`-- wsgi-scripts (actual service business logic)
|-- hdf_query.py
|-- query-server_d.py
|-- wsgid.py
`-- wsgi.py
### Project content description
The `Dockerfile` depends on:
- an httpd server (running on the container's port 80)
- python and WSGI packages
It needs a properly configured `wsgi.conf` that will be loaded from
the container's `/etc/httpd/conf.d/` (a sketch is shown below).
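As a reference, here is a minimal sketch of what `wsgi.conf` may contain.
This is an assumption, not the shipped file: the paths follow the
Dockerfile and the endpoint names follow this README.

WSGIScriptAlias /sedsearch /var/www/wsgi-scripts/wsgi.py
WSGIScriptAlias /seddaemon /var/www/wsgi-scripts/wsgid.py
<Directory "/var/www/wsgi-scripts">
    Require all granted
</Directory>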
The base container is a _fedora:latest_ environment.
Besides the instructions to build the container and the WSGI
configuration, the container needs two further pieces to run properly,
two folders:
- the `sed-data` one that contains the SED Models HDF5 file
- the `wsgi-scripts` that actually contains the business logic of the
service itself
(The actual HDF5 file is not kept in this repo, because it's too large
and because it is not wise to store binary blobs in a version control
system.)
The `wsgi-scripts` folder is provided and maintained in this repo; the
`sed-data` one can be anywhere else, within reach of the host system.
If the location of these pieces has to be changed, it suffices to
update the run command of the container accordingly (see below).
### Build and run the container image
To build the container image:
podman build --tag vlkb-sedmods -f ./Dockerfile
The tag can be one of your choice. It is suggested to have a dedicated
user to run the service in production.
The service expects the SED models in **/srv/sed-data/sim_total.dat**.
Start the service:
podman run -dt \
--name sedmod-test \
--rm \
-v $PWD/sed-data:/srv/sed-data:z \
-v $PWD/wsgi-scripts:/var/www/wsgi-scripts:z \
-p 8080:80 vlkb-sedmods
where the name is optional and of your choice and the left-hand side of
the _:_ in the _-v_ arguments must point to the actual location of the
two folders to be mounted in the container.
In this example the _-v_ arguments assume that the command is run from
the local working copy and that the HDF5 SED Models file is actually
within the `sed-data` folder.
Also, _-p_ maps port 80 of the container onto port 8080 of the host
server; this must be changed if the host's 8080 is already in use.
Kubernetes manifests are available in the [vlkb-k8s](https://ict.inaf.it/gitlab/ViaLactea/vlkb-k8s) project.
### Service endpoints
The service implementation is described [here](README_implementation.md).
The service presents the **/searchd** endpoint. Arguments are separated
by underscores and their meaning is indicated in this C++ code snippet:
```cpp
QString args = QString("'%1_%2_%3_%4_%5_%6_%7_0_%8_%9'")
.arg(sedFitInputW)
.arg(sedFitInputF)
.arg(sedFitInputErrF)
.arg(sedFitInputFflag)
.arg(ui->distTheoLineEdit->text())
.arg(ui->prefilterLineEdit->text())
.arg(sedWeights)
.arg(sedFitInputW)
.arg(ui->delta_chi2_lineEdit->text());
```
Access the service by:
curl localhost:8080/searchd/?arg0_arg1_..._arg9 > output.json
Response is in JSON format.
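For illustration, a hypothetical Python client that builds the
underscore-separated argument string in the same order as the C++
snippet above (all values and the host are made up):

```python
import urllib.parse, urllib.request

args = "_".join([
    "[70.,160.,250.]",  # observed wavelengths (um)
    "[12.,20.,15.]",    # fluxes (Jy)
    "[1.,2.,1.5]",      # flux errors (Jy)
    "[1,1,1]",          # flux flags (0 marks an upper/lower limit)
    "2000.",            # distance (pc)
    "0.2",              # prefilter threshold
    "0",                # SED weights (0 = use defaults)
    "0",                # fixed field, always 0 (cf. the C++ snippet)
    "[70.,160.,250.]",  # wavelengths to use in the fit
    "1.",               # delta chi2 threshold
])
url = "http://localhost:8080/searchd/?" + urllib.parse.quote(args)
print(urllib.request.urlopen(url).read().decode())
```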
### Service(s) in the container
Two flavours of the _SEDModS_ service can run with the repo content:
- one that reads the HDF5 at each query
- one that reads the HDF5 once and works as a _system daemon_
#### Single query service mode
This mode is directly available when the container is run. It uses the
following code files:
wsgi.py
hdf_query.py
If you run on the host server
curl localhost:8080/sedsearch/?'clump_mass<10.005' > output.dat
you should get the response in the `output.dat` file, or you can point
the browser to
http://host.server:8080/sedsearch/?'clump_mass<10.005'
and see the response directly.
#### Daemon service mode
This mode uses the code in:
wsgid.py
query-server_d.py
It requires a couple of processes to run before the daemonised service
can work. These processes run within the container, so, after running
it, one can launch them by attaching to the running container with
podman exec -it sedmod-test /bin/bash
and within it run
python -m Pyro4.naming &
python query-server_d.py &
After that, one can exit the shell and the daemon-based service should be
reachable at
http://host.server:8080/seddaemon
with the same usage of the single query one.
## SED Models HDF5 file
Currently, this file is preserved on the INAF ICT Owncloud instance.
### Network Proxy
The service can be made visible on a specific context path of the host
server's httpd using the _ProxyPass_ directive, like
<Location "/sedmods">
ProxyPass "http://localhost:8080"
</Location>
where _/sedmods_ is an example and the _8080_ port depends on the
parameters passed to the podman run command (see above).
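After adding the stanza, a quick host-side sanity check (a sketch
assuming a stock Fedora httpd, where the required proxy modules are
loaded by default):
apachectl configtest && systemctl reload httpd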
## Dependencies
On the host:
- podman
- httpd
Within the container (i.e. provided in the build):
- httpd with python3-mod\_wsgi
- python 3.x
- pandas
- Pyro4 (daemon mode)
- (py)tables
#!/bin/bash
set -eux
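# Start the Pyro4 name server and the SED query daemon in the background,
# logging their startup to /tmp/start-sed.log, then exec httpd as the
# container's foreground process (PID 1).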
{
date
env
python3 -m Pyro4.naming &
python3 /var/www/wsgi-scripts/query-server_d.py &
date
} 1> /tmp/start-sed.log 2>&1
exec /usr/sbin/httpd -DFOREGROUND
#!/usr/bin/env python3
from urllib.parse import unquote
import pandas as pd
import time
def query_out(parameters):
    t1=time.time()
    # strip spaces and URL-encoded quotes (%27), then decode the rest
    parsequery=parameters.replace(' ', '')
    query1=parsequery.replace('%27', '')
    query_final=unquote(query1)
    # the SED models table is a space-separated file mounted at /srv/sed-data
    dataset=pd.read_csv('/srv/sed-data/sim_total.dat', sep=' ')
    #dataset=pd.read_csv('/var/www/wsgi-scripts/sim_total.dat', sep=' ')
    myquery=dataset.query(query_final)
    t2=time.time()
    print(t2-t1)  # query wall-clock time, useful when debugging
    return myquery
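# Hypothetical local test (assumes /srv/sed-data/sim_total.dat is in place);
# the expression is the same one the README sends to the /sedsearch endpoint.
if __name__ == '__main__':
    print(query_out('clump_mass<10.005'))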
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Dec 17 14:32:22 2021
@author: smordini
useful functions for sed fitting
"""
import numpy as np
from scipy import interpolate
import math
def match(list_a,list_b):
    # return the positions, in each list, of the values common to both
    matched_values=list(set(list_a) & set(list_b))
    pos_a=[list_a.index(val) for val in matched_values]
    pos_b=[list_b.index(val) for val in matched_values]
    return(pos_a,pos_b)
def tostring(in_str):
if type(in_str)!=str:
in_str=str(in_str)
out_str=in_str.strip()
return out_str
def pad0_num(num):
    # append a trailing '0' to the decimal part of a number-like string
    if type(num)==str:
        try:
            float(num)
        except ValueError:
            print('The string must be a number. Retry')
    if type(num)!=str:
        num=str(num)
    q=num.find('.')
    if q<=len(num)-2 and q!=-1:
        num0=num+'0'
    else:
        num0=num
    return(num0)
def remove_char(in_str,ch):
if type(in_str)!=str:
in_str=str(in_str)
q=in_str.find(ch)
if q==-1:
out_str=in_str
else:
out_str=in_str.replace(ch, '')
return out_str
def planck(wavelength, temp):
try:
len(wavelength)
except:
wavelength=[wavelength]
try:
len(temp)
except:
pass
else:
        print('Only one temperature allowed, not a list')
return
if len(wavelength)<1:
print('Enter at least one wavelength in Angstrom.')
return
wave=[ wl*1e-8 for wl in wavelength]
    c1 = 3.7417749e-5 #2*pi*h*c*c with constants in cgs units
c2 = 1.4387687 # h*c/k
value = [c2/wl/temp for wl in wave]
# print(value)
bbflux=[]
test_precision=math.log(np.finfo(float).max)
# print(test_precision)
for val,wl in zip (value, wave):
if val<test_precision:
flux=c1/(wl**5*(math.expm1(val)))
# print(flux)
bbflux.append(flux*1e-8)
else:
bbflux.append(0)
if len(bbflux)==1:
bbflux=bbflux[0]
return bbflux
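# Consistency check for the constants above, in cgs (h=6.62607e-27 erg s,
# c=2.99792e10 cm/s, k=1.38065e-16 erg/K): 2*pi*h*c**2 = 3.7417749e-5 and
# h*c/k = 1.4387687, matching c1 and c2 as used here.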
def lbol(wavelength,flux,dist):
    # Port of IDL "function lbol,w,f,d".
    # Interpolation between data points is done in logarithmic space, so that
    # straight lines in the log-log plot (the SED is like that) are preserved.
    # The SED is resampled on a finer grid, transformed back to linear space,
    # and integrated with the trapezoidal rule.
    # wavelength is lambda in um, flux is in Jy, dist is in parsec.
    # convert flux to W/cm2/um
fw=[1.e-15*f*.2997/(w**2.) for f,w in zip(flux,wavelength)]
lw=[np.log10(w) for w in wavelength]
lfw=[np.log10(f) for f in fw]
#1000 points resampling
lw1=(np.arange(1000)*((max(lw)-min(lw))/1000.))+min(lw)
interpfunc = interpolate.interp1d(lw,lfw, kind='linear')
lfw1=interpfunc(lw1)
w1=[10**l for l in lw1]
fw1=[10**l for l in lfw1]
jy=[f/1.e-15/.3*(w**2.) for f,w in zip (fw1,w1)]
# ;integrate over whole range
fint=0.
# for i=0,n_elements(w1)-2 do fint=fint+((fw1(i)+fw1(i+1))*(w1(i+1)-w1(i))/2.):
    # trapezoidal rule; the IDL loop "for i=0,n_elements(w1)-2" is inclusive,
    # i.e. range(len(w1)-1) in Python
    for i in range(len(w1)-1):
        fint=fint+((fw1[i]+fw1[i+1])*(w1[i+1]-w1[i])/2.)
# ;fint=int_tabulated(w,fw,/double,/sort)
# ; integrate longword of 350um
# ;qc0=where(w ge 350.)
# ;fc0=int_tabulated(w(qc0),fw(qc0))
# ; compute lbol
# l=fint*4.*math.pi*d*3.e18/3.8e26*d*3.e18 ;lsol
lum=fint*4*np.pi*dist*3.086e18/3.827e26*dist*3.086e18 #lsol
# ;c0ratio=fc0/fint
# ;print,'Lsubmm/Lbol = ',c0ratio
return lum
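# Example (illustrative values): lbol([24.,70.,160.], [0.5,12.,20.], 2000.)
# integrates a three-point SED (wavelengths in um, fluxes in Jy) at 2 kpc
# and returns the bolometric luminosity in Lsun.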
def wheretomulti(array, indices):
    # convert flat indices into per-axis coordinates (port of IDL WhereToMulti);
    # integer division is required so that rows/frames come out as indices
    s=array.shape
    NCol=s[1]
    Col=indices % NCol
    if len(s)==2:
        Row=indices//NCol
        return (Col, Row)
    elif len(s)==3:
        NRow = s[2]
        Row = ( indices // NCol ) % NRow
        Frame = indices // ( NRow * NCol )
        return(Col, Row, Frame)
    else:
        Col=0
        Row=0
        Frame=0
        print('WhereToMulti called with bad input. Array not a vector or matrix.')
        return(Col, Row, Frame)
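if __name__ == '__main__':
    # Smoke tests with illustrative values (not from the original repo)
    print(match([1., 2., 3.], [2., 3., 4.]))              # ([1, 2], [0, 1]), up to set ordering
    print(planck([10000.], 5800.))                        # blackbody flux at 1 um for T=5800 K
    print(lbol([24., 70., 160.], [0.5, 12., 20.], 2000.)) # Lbol in Lsun at 2 kpc
    print(wheretomulti(np.zeros((3, 4)), np.array([7])))  # (array([3]), array([1]))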
"""
Created on Fri Mar 4 15:06:40 2022
@author: smordini
"""
import Pyro4
import Pyro4.util
import socket
from urllib.parse import unquote
import pandas as pd
import logging
from idl_to_py_lib import *
import numpy as np
import sys
sys.excepthook = Pyro4.util.excepthook
root_logger= logging.getLogger()
root_logger.setLevel(logging.DEBUG)
handler = logging.FileHandler('/var/log/httpd/sedfit_error.log', 'a', 'utf-8')
#handler = logging.FileHandler('/home/sedmods/vlkb-sedmods/sedfit_error.log', 'a', 'utf-8')
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
handler.setFormatter(formatter)
root_logger.addHandler(handler)
#logging.basicConfig(filename='/home/sedmods/vlkb-sedmods/sedfit_error.log', encoding='utf-8', level=logging.DEBUG, format='%(asctime)s %(levelname)-8s %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
#if socket.gethostname().find('.')>=0:
hostname=socket.gethostname()
#else:
# hostname=socket.gethostbyaddr(socket.gethostname())[0] + ".local"
@Pyro4.expose
class QueryMaker(object):
    # the SED models table is loaded once, at class definition time
    dataset=pd.read_csv('/srv/sed-data/sim_total.dat', sep =' ')
    #dataset=pd.read_csv('/var/www/wsgi-scripts/sim_total.dat', sep =' ')
    def query_out(self, parameters):
try:
parsequery=parameters.replace(' ', '')
query1=parsequery.replace('%27', '')
query_final=unquote(query1)
query_log=query_final.replace('_', ' ')
query_list=query_final.split('_')
            if query_final=='':
                output='Daemon service running correctly'
                return output
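            # Positional arguments, underscore-separated (cf. the C++ snippet in
            # the README): wavelengths, fluxes, flux errors, flux flags, distance,
            # prefilter threshold, SED weights, a fixed 0, wavelengths to use,
            # and the delta chi2 threshold.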
w_in=eval(query_list[0])
f_in=eval(query_list[1])
df_in=eval(query_list[2])
fflag_in=eval(query_list[3])
distance=eval(query_list[4])
prefilter_thresh=eval(query_list[5])
sed_weights=eval(query_list[6])
local=eval(query_list[7])
use_wl=eval(query_list[8])
delta_chi2=eval(query_list[9])
        except Exception:
            logging.exception('Error occurred in reading/importing function parameters')
            return 'Error occurred in reading/importing function parameters'
        logging.info(query_log)
if distance<0:
logging.error('Negative value for distance; program interrupted')
mystr='The distance is set to a negative value. Please provide a positive value.'
return mystr
phys_par=[tt.upper() for tt in ['clump_mass','compact_mass_fraction','clump_upper_age','dust_temp','bolometric_luminosity','random_sample','n_star_tot','m_star_tot','n_star_zams','m_star_zams','l_star_tot','l_star_zams','zams_luminosity_1','zams_mass_1','zams_temperature_1','zams_luminosity_2','zams_mass_2','zams_temperature_2','zams_luminosity_3','zams_mass_3','zams_temperature_3']]
jy2mjy=1000.
d_ref=1000.
ref_wave=[3.4,3.6,4.5,4.6,5.8,8.0,12.,22.,24.0,70.,100.,160.,250.,350.,500.,870.,1100.]
col_names=['WISE1','I1','I2','WISE2','I3','I4','WISE3','WISE4','M1','PACS1','PACS2','PACS3','SPIR1','SPIR2','SPIR3','LABOC','BOL11']
fac_resc=(distance/d_ref)**2.
delta=1-(prefilter_thresh)
q12,q21=match(w_in, ref_wave)
w=[w_in[i] for i in q12]
f=[f_in[i] for i in q12]
d=[df_in[i] for i in q12]
ff=[fflag_in[i] for i in q12]
wwll=[i for i in w]
w.sort()
wl=[]
fl=[]
df=[]
fflag=[]
use_wave=[]
for ww in w:
q=wwll.index(ww)
wl.append(wwll[q])
fl.append(f[q])
df.append(d[q])
fflag.append(ff[q])
use_wave.append(use_wl[q])
par_str=''
par_str_arr=[]
ret_par_str=''
phys_str=''
phys_par_arr=[]
ret_phys_par=''
for t in range(len(ref_wave)):
for k in wl:
if ref_wave[t]-0.05<k and ref_wave[t]+0.05>k:
par_str=par_str+col_names[t]+','
ret_par_str=ret_par_str+col_names[t]+','
par_str_arr.append(col_names[t].lower())
par_str=par_str[:-1].upper()
ret_par_str=ret_par_str[:-1].lower()
for k in range(len(phys_par)):
phys_str=phys_str+phys_par[k]+','
ret_phys_par=ret_phys_par+phys_par[k]+','
phys_par_arr.append(phys_par[k].lower())
phys_str=phys_str[:-1].upper()
ret_phys_par=ret_phys_par[:-1].lower()
if use_wave!=0:
query=""
for bb, bb_count in zip(use_wave, range(len(use_wave))):
if bb in wl:
qband=wl.index(bb)
else:
logging.error('Reference wavelength required: '+str(wl.index(bb))+' not found in data file; wavelength excluded from fit procedure')
continue
if bb in ref_wave:
qrefband=ref_wave.index(bb)
else:
qrefband=-1
qqueryband,qdummy=match(ref_wave, wl)
ul_str=''
if 0 in fflag:
qulband=[i for i,e in enumerate(fflag) if e==0 ]
nqulband=fflag.count(0)
ul_str=' and '
for t in range(nqulband):
                        ul_str=ul_str+'('+col_names[qqueryband[qulband[t]]]+"<'"+tostring(fl[qulband[t]]*jy2mjy*fac_resc)+"') and "
if fflag[qband]==1:
ul_str=ul_str[0:len(ul_str)-4]
if fflag[qband]==0:
ul_str=ul_str[4:len(ul_str)-4]
nreq_par=1+len(phys_par_arr)+len(par_str_arr)
if fflag[qband]==1:
query=query+(str(remove_char(pad0_num(tostring(float(fl[qband]*jy2mjy*fac_resc*(1-(delta**2.))))),'+')+"<"+col_names[qrefband]+"<"+remove_char(pad0_num(tostring(float(fl[qband]*jy2mjy*fac_resc*(1+(delta**2.))))),'+')+' or ') )
if fflag[qband]==0:
                    query=query+(str(col_names[qrefband]+"<"+remove_char(pad0_num(tostring(float(fl[qband]*jy2mjy*fac_resc))),'+'))+' or ')  # fluxes are multiplied by 1000 because model fluxes are in milliJy
query_final=query[:-4]
try:
dmodels=QueryMaker.dataset.query(query_final)
except:
logging.exception('Error occurred while querying dataset with band intervals')
else:
#compute object luminosity from observed SED
lum=lbol(wl,fl,distance)
            # rescaling factor has to stay at 1 if luminosity is used (reason to be verified)
            fac_resc=1.
            query_final=remove_char(pad0_num(tostring(float(lum*fac_resc*(1-(delta**2.))))),'+')+"< bolometric_luminosity <"+remove_char(pad0_num(tostring(float(lum*fac_resc*(1+(delta**2.))))),'+')
try:
dmodels=QueryMaker.dataset.query(query_final)
except:
                logging.exception('Error occurred while querying dataset with bolometric luminosity interval')
nlines=len(dmodels)
if nlines<1:
logging.info('Model selection not performed. Program interrupted')
            output='Model selection not performed. Retry with different parameters'
return output
else:
logging.info('Model selection completed, obtained '+str(nlines)+' models.')
n_sel=len(dmodels['cluster_id'])
flag=[int(ff) for ff in fflag]
ul_flag=np.zeros(len(flag))
ll_flag=np.zeros(len(flag))
dyd=np.zeros(len(flag))
if sed_weights==0:
sed_weights=[3./7.,1./7.,3./7.]
renorm=sum(sed_weights)
w1=np.sqrt(sed_weights[0]/renorm)
w2=np.sqrt(sed_weights[1]/renorm)
w3=np.sqrt(sed_weights[2]/renorm)
qmir=[]
qfir=[]
qmm=[]
for i in range(len(wl)):
if wl[i]<25:
qmir.append(i)
if wl[i]>=25 and wl[i]<=250:
qfir.append(i)
if wl[i]>250:
qmm.append(i)
if len(qmir)>0:
q1=[]
q1neg=[]
for qq in qmir:
if flag[qq]==1:
q1.append(qq)
else:
q1neg.append(qq)
nq1neg=len(q1neg)
nq1=len(q1)
if nq1>0:
for qq in qmir:
dyd[qq]=np.sqrt(nq1)/w1
if nq1neg>0:
for qq in q1neg:
dyd[qq]=9999. #i.e. it's an upper/lower limit
ul_flag[qq]=1
if len(qfir)>0:
q2=[]
q2neg=[]
for qq in qfir:
if flag[qq]==1:
q2.append(qq)
else:
q2neg.append(qq)
nq2neg=len(q2neg)
nq2=len(q2)
if nq2>0:
for qq in qfir:
dyd[qq]=np.sqrt(nq2)/w2
if nq2neg>0:
for qq in q2neg:
dyd[qq]=9999. #i.e. it's an upper/lower limit
ul_flag[qq]=1
if len(qmm)>0:
q3=[]
q3neg=[]
for qq in qmm:
if flag[qq]==1:
q3.append(qq)
else:
q3neg.append(qq)
nq3neg=len(q3neg)
nq3=len(q3)
if nq3>0:
for qq in qmm:
dyd[qq]=np.sqrt(nq3)/w3
if nq3neg>0:
for qq in q3neg:
dyd[qq]=9999. #i.e. it's an upper/lower limit
ul_flag[qq]=1
good_flag=[1-qq for qq in ul_flag]
dyd=[dd/min(dyd) for dd in dyd]
dyd=[dd*ff/ll for dd,ff,ll in zip( dyd, df,fl)]
dyd=[dd*ff for dd,ff in zip(dyd, fl)]
        nstep=10
        dist_arr=np.arange(nstep)
        dist_arr=[distance*(1-(delta**2.))+dd*(distance*(1+(delta**2.))-distance*(1-(delta**2.)))/nstep for dd in dist_arr]
nw=len(wl)
invalid_chi2=-999
matrix_models=np.zeros([n_sel,nstep,nw])
matrix_chi2=np.zeros([n_sel,nstep])
matrix_chi2[:]=invalid_chi2
matrix_fluxes=np.zeros([n_sel,nstep,nw])
matrix_dfluxes=np.zeros([n_sel,nstep,nw])
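        # Grids have shape (n_sel models) x (nstep trial distances) x (nw bands):
        # model fluxes are rescaled to each trial distance, observed fluxes and
        # errors are broadcast along the grid, and chi2 is summed over bands.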
for i in range(nstep):
for k in range(nw):
matrix_models[:,i,k]=dmodels[par_str_arr[k].upper()]/1000*((d_ref/dist_arr[i])**2.)
for k in range(nw):
matrix_fluxes[:,:,k]=fl[k]
matrix_dfluxes[:,:,k]=dyd[k]
dmat=np.zeros([n_sel,nstep,nw])
for j in range(nstep):
for k in range(nw):
dmat[:,j,k]=((matrix_models[:,j,k]-matrix_fluxes[:,j,k])**2)/(matrix_dfluxes[:,j,k]**2)
matrix_chi2=np.sum(dmat, 2)
if delta_chi2!=0:
dchi2=delta_chi2
else:
dchi2=1
chi2min=np.min(matrix_chi2)
qchi2=np.argwhere(matrix_chi2<=chi2min+dchi2)
nqchi2=len(qchi2)
par={'cluster_id':[], 'clump_mass':[], 'compact_mass_fraction':[], 'clump_upper_age':[], 'dust_temp':[], 'bolometric_luminosity':[], 'random_sample':[], 'n_star_tot':[], 'm_star_tot':[], 'n_star_zams':[], 'm_star_zams':[], 'l_star_tot':[], 'l_star_zams':[], 'zams_luminosity_1':[], 'zams_mass_1':[], 'zams_temperature_1':[], 'zams_luminosity_2':[], 'zams_mass_2':[], 'zams_temperature_2':[], 'zams_luminosity_3':[], 'zams_mass_3':[], 'zams_temperature_3':[], 'd':[], 'chi2':[], 'wmod':[], 'fmod':[]}
        logging.info('Selected '+str(nqchi2)+' models before duplicate removal.')
for i in range(nqchi2):
if dmodels['cluster_id'].iloc[qchi2[i][0]] not in par['cluster_id']:
par['cluster_id'].append(dmodels['cluster_id'].iloc[qchi2[i][0]])
par['clump_mass'].append(dmodels['clump_mass'].iloc[qchi2[i][0]])
par['compact_mass_fraction'].append(dmodels['compact_mass_fraction'].iloc[qchi2[i][0]])
par['clump_upper_age'].append(dmodels['clump_upper_age'].iloc[qchi2[i][0]])
par['bolometric_luminosity'].append(dmodels['bolometric_luminosity'].iloc[qchi2[i][0]])
par['random_sample'].append(dmodels['random_sample'].iloc[qchi2[i][0]])
par['dust_temp'].append(dmodels['dust_temp'].iloc[qchi2[i][0]])
par['n_star_tot'].append(dmodels['n_star_tot'].iloc[qchi2[i][0]])
par['m_star_tot'].append(dmodels['m_star_tot'].iloc[qchi2[i][0]])
par['n_star_zams'].append(dmodels['n_star_zams'].iloc[qchi2[i][0]])
par['m_star_zams'].append(dmodels['m_star_zams'].iloc[qchi2[i][0]])
par['l_star_tot'].append(dmodels['l_star_tot'].iloc[qchi2[i][0]])
par['l_star_zams'].append(dmodels['l_star_zams'].iloc[qchi2[i][0]])
par['zams_luminosity_1'].append(dmodels['zams_luminosity_1'].iloc[qchi2[i][0]])
par['zams_mass_1'].append(dmodels['zams_mass_1'].iloc[qchi2[i][0]])
par['zams_temperature_1'].append(dmodels['zams_temperature_1'].iloc[qchi2[i][0]])
par['zams_luminosity_2'].append(dmodels['zams_luminosity_2'].iloc[qchi2[i][0]])
par['zams_mass_2'].append(dmodels['zams_mass_2'].iloc[qchi2[i][0]])
par['zams_temperature_2'].append(dmodels['zams_temperature_2'].iloc[qchi2[i][0]])
par['zams_luminosity_3'].append(dmodels['zams_luminosity_3'].iloc[qchi2[i][0]])
par['zams_mass_3'].append(dmodels['zams_mass_3'].iloc[qchi2[i][0]])
par['zams_temperature_3'].append(dmodels['zams_temperature_3'].iloc[qchi2[i][0]])
par['d'].append(dist_arr[qchi2[i][1]])
final_dist=dist_arr[qchi2[i][1]]
par['chi2'].append(matrix_chi2[tuple(qchi2[i])])
par['wmod'].append(list([3.4,3.6,4.5,4.6,5.8,8.0,12.,22.,24.0,70.,100.,160.,250.,350.,500.,870.,1100.]))
fluxes=[]
for ff in col_names:
myflux=dmodels[ff].iloc[qchi2[i][0]]/1000*((d_ref/final_dist)**2)
fluxes.append(myflux)
par['fmod'].append(list(fluxes))
else:
j=par['cluster_id'].index(dmodels['cluster_id'].iloc[qchi2[i][0]])
if par['chi2'][j]>matrix_chi2[tuple(qchi2[i])]:
par['cluster_id'][j]=dmodels['cluster_id'].iloc[qchi2[i][0]]
par['clump_mass'][j]=dmodels['clump_mass'].iloc[qchi2[i][0]]
par['compact_mass_fraction'][j]=dmodels['compact_mass_fraction'].iloc[qchi2[i][0]]
par['clump_upper_age'][j]=dmodels['clump_upper_age'].iloc[qchi2[i][0]]
par['bolometric_luminosity'][j]=dmodels['bolometric_luminosity'].iloc[qchi2[i][0]]
par['random_sample'][j]=dmodels['random_sample'].iloc[qchi2[i][0]]
par['dust_temp'][j]=dmodels['dust_temp'].iloc[qchi2[i][0]]
par['n_star_tot'][j]=dmodels['n_star_tot'].iloc[qchi2[i][0]]
par['m_star_tot'][j]=dmodels['m_star_tot'].iloc[qchi2[i][0]]
par['n_star_zams'][j]=dmodels['n_star_zams'].iloc[qchi2[i][0]]
par['m_star_zams'][j]=dmodels['m_star_zams'].iloc[qchi2[i][0]]
par['l_star_tot'][j]=dmodels['l_star_tot'].iloc[qchi2[i][0]]
par['l_star_zams'][j]=dmodels['l_star_zams'].iloc[qchi2[i][0]]
par['zams_luminosity_1'][j]=dmodels['zams_luminosity_1'].iloc[qchi2[i][0]]
par['zams_mass_1'][j]=dmodels['zams_mass_1'].iloc[qchi2[i][0]]
par['zams_temperature_1'][j]=dmodels['zams_temperature_1'].iloc[qchi2[i][0]]
par['zams_luminosity_2'][j]=dmodels['zams_luminosity_2'].iloc[qchi2[i][0]]
par['zams_mass_2'][j]=dmodels['zams_mass_2'].iloc[qchi2[i][0]]
par['zams_temperature_2'][j]=dmodels['zams_temperature_2'].iloc[qchi2[i][0]]
par['zams_luminosity_3'][j]=dmodels['zams_luminosity_3'].iloc[qchi2[i][0]]
par['zams_mass_3'][j]=dmodels['zams_mass_3'].iloc[qchi2[i][0]]
par['zams_temperature_3'][j]=dmodels['zams_temperature_3'].iloc[qchi2[i][0]]
par['d'][j]=dist_arr[qchi2[i][1]]
final_dist=dist_arr[qchi2[i][1]]
par['chi2'][j]=matrix_chi2[tuple(qchi2[i])]
par['wmod'][j]=list([3.4,3.6,4.5,4.6,5.8,8.0,12.,22.,24.0,70.,100.,160.,250.,350.,500.,870.,1100.])
fluxes=[]
for ff in col_names:
myflux=dmodels[ff].iloc[qchi2[i][0]]/1000*((d_ref/final_dist)**2)
fluxes.append(myflux)
par['fmod'][j]=(list(fluxes))
logging.info('Sedfit procedure completed. Obtained '+ str(len(par['cluster_id']))+' models after duplicate removal.')
pd_dict=pd.DataFrame.from_dict(par)
test1=pd_dict.to_json(orient='split')
        test2=str(test1)
        logging.info('Procedure completed.')
        return test2
daemon=Pyro4.Daemon(hostname)
ns=Pyro4.locateNS()
uri=daemon.register(QueryMaker)
ns.register("test.query", uri)
print("Ready. Object uri=", uri)
daemon.requestLoop()
#!/usr/bin/env python3
import sys
print('python version', sys.version)
import pandas
sys.path.insert(0,"/var/www/wsgi-scripts/")
from hdf_query import query_out
def application(environ, start_response):
    status = '200 OK'
    var1 = str(environ['QUERY_STRING'])
    test=query_out(var1)
    test1=test.to_json(orient='split')
    test2=str(test1)
    output=bytes(test2,'utf-8')
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
#!/usr/bin/env python3
import Pyro4
def application(environ, start_response):
status = '200 OK'
query_in =str( environ['QUERY_STRING'])
query_maker=Pyro4.Proxy("PYRONAME:test.query")
    test2=query_maker.query_out(query_in)
    output=bytes(test2,'utf-8')
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]