Let me first describe the server structure. My WordPress site is hosted inside a Docker container, along with nginx, on Server B (https://serverB.com), but I am trying to access it through https://serverA.com/blogs. If I configure WP_HOME as https://serverB.com, everything runs smoothly: I can install WordPress and everything works. But if I change WP_HOME to https://serverA.com/blogs, all of a sudden I get a 404 - Not Found error. (I brought the Docker containers down and deleted the volume between attempts.)
I added the following line in wp-config.php as well.
The 404 - Not Found error is returned by the Docker container's nginx. That means the request has travelled all the way to the container's nginx, which then either does not know how to handle it or cannot find the file.
Error message from docker logs:
webserver_test | 192.168.192.1 - - [19/Apr/2021:04:44:18 +0000] "GET /blogs/wp-login.php HTTP/1.0" 404 556 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36"
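For reference, the proxy rule on server A looks roughly like this (a trimmed sketch from memory, not my exact config):

# Requests for /blogs/* are forwarded to the container's nginx on server B.
# With no URI part on proxy_pass, the /blogs prefix is passed through unchanged,
# which matches the log above: the container's nginx gets asked for
# /blogs/wp-login.php, a path it doesn't serve.
location /blogs/ {
    proxy_pass https://serverB.com;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}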
I'm trying to write an ASCII-to-Baudot converter (for a teletype, so obviously with dropped characters) using pseudoterminals. The idea is to have a pty master/slave pair: write to the slave, read from the master, convert ASCII to Baudot, and send to the teletype. Input from the teletype will be read in, converted from Baudot to ASCII, sent to the master, and processed by the slave.
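The outgoing half would look something like this sketch (the ITA2 table is deliberately incomplete and send_to_teletype is a placeholder for the actual serial write):

import os
import pty

# Partial ITA2 (Baudot) letters table; just enough to illustrate the idea.
ASCII_TO_BAUDOT = {'E': 0x01, 'A': 0x03, ' ': 0x04, 'T': 0x10}

def send_to_teletype(data: bytes) -> None:
    pass  # placeholder: would write to the serial port

m, s = pty.openpty()
while True:
    ch = os.read(m, 1).decode('ascii', errors='replace').upper()
    code = ASCII_TO_BAUDOT.get(ch)
    if code is None:
        continue  # character has no Baudot equivalent: dropped
    send_to_teletype(bytes([code]))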
I can do this with a direct serial connection (screen /dev/pts/x), but agetty doesn't seem to work. I'm monitoring the master with
import pty
import os

# Open a master/slave pty pair and report the slave's device path.
m, s = pty.openpty()
print(s, os.readlink(os.path.join('/proc/self/fd', str(s))))

# Echo anything that arrives on the master, one byte at a time.
while True:
    data = os.read(m, 1)
    print(data)
and can see data sent through screen, but not agetty. The agetty command I'm using is (where /dev/pts/2 is the slave file)
sudo /sbin/agetty 9600 /dev/pts/2
How do I start a getty on a pseudoterminal? Does getty look for a special response character (similar to a modem) before sending data? The getty process is not sending any data to my pseudoterminal.
As a small shared-hosting provider, our MariaDB server is going to hit a hardware limit very soon, so we want to add another machine running another MariaDB instance.
Our goals are:
Reduce disk/cpu/ram usage on current machine.
By reducing resource usage (goal 1), there will be more room for the current users/tables on this machine.
Easily scale to more machines in the future.
Our customers should not notice anything at all; they should not be forced to change their software's configuration.
What I am thinking of is an instance that works as a proxy: it knows which database lives on which instance, automatically routes each query to that instance, then receives the results and forwards them to the client.
Here are my questions:
Is this possible? What is its technical name, and how can we implement it with MySQL/MariaDB? Is there any better way to fulfill our goals?
I built a Java EE project and chose GlassFish as the server and MySQL as the database. When I try to integrate the MySQL database into the GlassFish server, I get some errors. I fill in the database properties (name, server, PortNumber, etc.), and when I test the connection by pressing the Ping button, this message is displayed:
An error has occurred Ping Connection Pool failed for DemoDatabase. Class name is wrong or classpath is not set for : com.mysql.cj.jdbc.Driver Please check the server.log for more details.
This message is in server.log:
Cannot find poolName in ping-connection-pool command model, file a bug
injection failed on org.glassfish.connectors.admin.cli.PingConnectionPool.poolName with class java.lang.String
I'm trying to write a Windows service that will persistently connect to and pull files from a network share on a Windows 7 computer. Both computers are on a private network, and the network share has read permissions set to "Everyone" and write permissions set to administrators. Neither computer is on a domain.
I'm able to access the network share through the GUI without entering a username or password. However, when I use the UNC path in a Windows service running as Network Service, it says the UNC path doesn't exist. I've also tried creating a user on the Windows 10 computer with the same credentials as a non-administrative user on the Windows 7 computer (as suggested here), with no luck there either.
So this is a minor inconvenience, but I am curious whether anyone well versed in email forwarding and Gmail can help me.
I have a vanity domain, call it coder.dev, and I use the address code@coder.dev as an alias for my personal Gmail, code@gmail.com.
I compose the initial message, message_id=A, in Gmail, and send it through AWS SES SMTP.
AWS SES creates its own message_id=B and sends it to end user (stranger@gmail.com)
stranger@gmail.com replies with message_id=C, and sends it to AWS SES. It also sets References: B
My email forwarding Lambda forwards the message to me (code@gmail.com), with message_id=D.
Gmail does not show A in the same thread as D, on my end (code@gmail.com)
Note that if I reply to D, and stranger replies, and I reply back, etc. etc., all those messages are threaded together. Because at this point, we are building up the References: list with ids we have both seen before. It's only A that is left out.
What's funny/sad is message A also contains X-Gmail-Original-Message-ID: A, and that makes it to stranger@gmail.com, but then stranger doesn't send that header back in message C or use it in the References: list. Google doesn't know how to talk to itself :|
I have a test vhost on my web server for which I'm trying to enforce TLSv1.3 only, but Apache refuses to disable TLSv1.2. TLSv1.3 does work; however, the following validation services all show that TLSv1.2 is still running on my vhost:
<VirtualHost XX.XX.XX.XX:443>
    ServerName testing.example.com
    DocumentRoot "/var/www/test"
    ErrorLog ${APACHE_LOG_DIR}/test-error.log
    CustomLog ${APACHE_LOG_DIR}/test-access.log combined
    # Include /etc/letsencrypt/options-ssl-apache.conf
    SSLEngine on
    SSLCompression off
    SSLCertificateFile /etc/letsencrypt/live/testing.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/testing.example.com/privkey.pem
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
    # SSLCipherSuite "HIGH:!aNULL:!MD5:!3DES:!CAMELLIA:!AES128"
    # SSLHonorCipherOrder off
    SSLProtocol -all +TLSv1.3
    SSLOpenSSLConfCmd DHParameters "/etc/ssl/private/dhparams_4096.pem"
</VirtualHost>
info from "apachectl -S":
root@domain:~# apachectl -S
VirtualHost configuration:
XX.XX.XX.XX:80 is a NameVirtualHost
... (irrelevant) ...
XX.XX.XX.XX:443 is a NameVirtualHost
    default server blah.example.com (/etc/apache2/sites-enabled/sites.conf:13)
    port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:13)
    **port 443 namevhost test.example.com (/etc/apache2/sites-enabled/sites.conf:29)**
    port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:54)
    port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:93)
    port 443 namevhost blah.example.org (/etc/apache2/sites-enabled/sites.conf:111)
    port 443 namevhost blah.example.tk (/etc/apache2/sites-enabled/sites.conf:132)
    port 443 namevhost blah.example.com (/etc/apache2/sites-enabled/sites.conf:145)
[XX:XX:XX:XX:XX:XX:XX:XX]:80 is a NameVirtualHost
... (irrelevant) ...
[XX:XX:XX:XX:XX:XX:XX:XX]:443 is a NameVirtualHost
... (irrelevant; note the subdomain in question only has an IPv4 DNS entry, no IPv6) ...
ServerRoot: "/etc/apache2"
Main DocumentRoot: "/var/www/html"
Main ErrorLog: "/var/log/apache2/error.log"
Mutex fcgid-proctbl: using_defaults
Mutex ssl-stapling: using_defaults
Mutex ssl-cache: using_defaults
Mutex default: dir="/var/run/apache2/" mechanism=default
Mutex mpm-accept: using_defaults
Mutex fcgid-pipe: using_defaults
Mutex watchdog-callback: using_defaults
Mutex rewrite-map: using_defaults
Mutex ssl-stapling-refresh: using_defaults
PidFile: "/var/run/apache2/apache2.pid"
Define: DUMP_VHOSTS
Define: DUMP_RUN_CFG
Define: MODPERL2
Define: ENABLE_USR_LIB_CGI_BIN
User: name="www-data" id=33
Group: name="www-data" id=33
root@domain:~#
I have it commented out in the vhost in question, but other vhosts are using letsencrypt/options-ssl-apache.conf, which I'll include here in case it could be interfering somehow:
SSLEngine on
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder on
SSLSessionTickets off
SSLOptions +StrictRequire
TL;DR: is it possible to receive email for aliases of multiple domains on a single KVM?
I had a DigitalOcean server with multiple websites hosted on it, and needed email aliases for more than one of those domains. On several occasions mail was not delivered; I believe this may be because the corresponding domain did not have a PTR record. (Could be wrong, I'm new here.)
PTR records with DO are tied to droplet names, so it seemed impossible to have PTR records for multiple domains; thus I was stuck with incomplete MX records, and that may have been the cause of my undelivered mail.
I was thinking, there must be a way around this issue, besides renting another KVM.
I'm getting to know how load balancers work in cloud platforms. I'm specifically talking about load balancers you use to expose multiple backends to the public internet here, not internal load balancers.
I started with GCP, where when you provision a load balancer, you get a single public IP address. Then I learned about AWS, where when you provision a load balancer (or at least, the Elastic Load Balancer), you get a host name (like my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com).
With the single IP, I can set up any DNS records I like. This means I can keep my name servers outside of the cloud platform to set up domains, and I can do DNS challenges for Let's Encrypt, because I can set a TXT record for my domain after setting an A record for it. With the host-name approach, I have to use ALIAS records (AWS has to track things internally), so I have to use their DNS service (Route 53). This DNS difference is a slight inconvenience for me because it's not what I'm used to; if I want to keep my main name servers for my domain outside of AWS, I can still do that by delegating a subdomain of my domain to Route 53's name servers.
So far, this DNS difference is the only consequence of this load balancer architectural difference that I've noticed. Maybe there are more. Is there a reason GCP and AWS may have chosen the approaches they did, from an architecture perspective? Pros and cons?
I'm new to Nginx and Ubuntu - I have been on Windows Server for over a decade, and this is my first try at Ubuntu and Nginx, so feel free to correct any wrong assumption I write here :)
My setup: I have an Express.js app (Node app) running as an upstream server, and a front app, built in Svelte, accessing the Express.js/Node app through an Nginx reverse proxy. Both ends use Let's Encrypt, and CORS is set up as you will see shortly.
When I run front and back apps on localhost, I'm able to login, set two cookies to the browser and all endpoints perform as expected.
When I deployed the apps, I ran into a weird issue: the cookies are lost once I refresh the login page. I added a few flags to my server block, but no go.
I'm sure there is a way - I usually find a way - but this issue is really beyond my limited knowledge of Nginx and reverse-proxy setups. I hope someone with enough knowledge can point me in the right direction or explain how to fix it.
Here is the issue: my frontend is available at travelmoodonline.com. Click on login. Username: mongo@mongo.com, password: 123. Inspect the network tab in dev tools; the header and response are all set correctly. Check the cookies tab under network once you log in, and you will see two cookies, one accesstoken and one refreshtoken.
Refresh the page. Poof. Tokens are gone. I no longer know anything about the user. Stateless.
On localhost, I refresh and the cookies are still there once I set them. With Nginx as a proxy, I'm not sure what happens.
So my questions are: How do I fix it so the cookies are set and sent with every request? Why do the cookies disappear? Are they still in memory somewhere? Is the path wrong? Or are the cookies deleted once I leave the page, so that if I redirect the user to another page after login, the cookies don't show in dev tools?
My code : node/expressjs server route code to login user:
app.post('/login', (req, res) => {
  // get form data and create cookies
  res.cookie("accesstoken", accessToken, { sameSite: 'none', secure: true });
  res.cookie("refreshtoken", refreshtoken, { sameSite: 'none', secure: true })
     .json({ "loginStatus": true, "loginMessage": "vavoom: " + doc._id });
});
Frontend - svelte - fetch route with a form to collect username and password and submit it to server:
# Default server configuration
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/defaultdir;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ /index.html;
    }
}

# port 80 with www
server {
    listen 80;
    listen [::]:80;
    server_name www.travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;
    }
    return 308 https://www.travelmoodonline.com$request_uri;
}

# port 80 without www
server {
    listen 80;
    listen [::]:80;
    server_name travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;
    }
    return 308 https://www.travelmoodonline.com$request_uri;
}

# HTTPS server, port 443 with www
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    server_name www.travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;
    ssl_certificate /etc/letsencrypt/live/travelmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/travelmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    location / {
        try_files $uri $uri/ /index.html;
    }
}

# HTTPS server, port 443 without www
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    server_name travelmoodonline.com;
    root /var/www/travelmoodonline.com;
    index index.html;
    location / {
        try_files $uri $uri/ /index.html;
    }
    ssl_certificate /etc/letsencrypt/live/travelmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/travelmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    server_name foodmoodonline.com www.foodmoodonline.com;
    # localhost settings
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        # proxy_cookie_path / "/; secure; HttpOnly; SameSite=strict";
        # proxy_pass_header localhost;
        # proxy_pass_header Set-Cookie;
        # proxy_cookie_domain localhost $host;
        # proxy_cookie_path /;
    }
    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/foodmoodonline.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/foodmoodonline.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.foodmoodonline.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    if ($host = foodmoodonline.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot
    listen 80;
    listen [::]:80;
    server_name foodmoodonline.com www.foodmoodonline.com;
    return 404; # managed by Certbot
}
I tried 301, 302, 307, and 308 after reading that some of them cover GET and not POST, but that didn't change the behavior I described above. Why doesn't the cookie stay in the browser once it shows in dev tools? Should I use rewrite instead of a redirect? I'm lost.
I'm not sure whether it's nginx reverse-proxy settings I'm not aware of, the server block settings, or the SSL redirect causing the browser to lose the cookies, but once a cookie is set, the browser is supposed to send it with each request. What is going on here?
I've got a Google Cloud VM, which I'm currently running on the free tier. This gives me 1GB free egress per month before I start getting charged.
Because of this, I want to hard limit the egress of the VM to never exceed this cap in a given month.
After searching for how to do this for a while, every piece of info seems to be generalised traffic shaping to limit peak bandwidth, rather than setting monthly limits.
Eventually I stumbled across this guide, which implies that what I want to do is possible with tc. However, that particular use case doesn't suit my needs: it is a rolling limiter, whereas I need limits that reset at the start of the calendar month.
Ideally, I would like this to work in two tiers. The first is 900MB of carefree usage per calendar month, which can be used as quickly or as slowly as is needed. Once that has been used, the remaining 100MB should be allocated as is described in the guide linked above, accumulating in the bucket. Then, at the end of the calendar month, all limits are reset.
Is there a simple way to go about this?
Annoyingly, GCP doesn't have the ability to monitor cumulative egress and set an alert; the best I can do is set a budget alert once I've already been charged. I'd like to stop this from happening before, rather than after, I'm charged.
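To make the idea concrete, this is the kind of cron-driven check I'm imagining; a sketch only, assuming eth0 is the egress interface and that something else records the counter baseline on the 1st of the month (the kernel counters also reset on reboot):

import subprocess
from pathlib import Path

IFACE = "eth0"                      # assumption: the VM's egress interface
SOFT_CAP = 900 * 1024 ** 2          # 900 MB of "carefree" usage

def tx_bytes() -> int:
    # Cumulative bytes transmitted since boot, from the kernel's counters.
    return int(Path(f"/sys/class/net/{IFACE}/statistics/tx_bytes").read_text())

def enforce(baseline: int) -> None:
    # baseline = counter value recorded on the 1st of the month.
    if tx_bytes() - baseline >= SOFT_CAP:
        # Over the carefree tier: throttle hard with a token bucket filter,
        # roughly the tc setup from the linked guide.
        subprocess.run(["tc", "qdisc", "replace", "dev", IFACE, "root",
                        "tbf", "rate", "8kbit", "burst", "32kbit",
                        "latency", "400ms"], check=True)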
2021/03/15 00:15:52 [emerg] 1#1: host not found in upstream "posi:8000" in /etc/nginx/conf.d/posi.conf:2 nginx: [emerg] host not found in upstream "posi:8000" in /etc/nginx/conf.d/posi.conf:2
I have two Windows Server 2016 machines with Hyper-V installed. Each server has two Ethernet adapters, and each Hyper-V host has several VMs. My goal is for VMs to be able to communicate with each other if they are in the same VLAN.
To make the network connection redundant, I created a network team on the physical machine, using "Switch Independent" mode with the "Address Hash" option. In Virtual Switch Manager, I created an external virtual switch by selecting the teamed adapter (Microsoft Network Adapter Multiplexor Driver).
Under each VM, I created a virtual adapter with a VLAN tag.
However, the VMs in the same VLAN cannot communicate with each other.
On the switch side, I have already configured trunk mode for all the ports connected with the physical machines.
If I remove the teaming, the VMs can communicate with VLAN tags. How can I address this issue?
I'm in the process of reconfiguring Outlook 2016 clients with an Exchange 365 backend. The majority of my users need access to one or more shared mailboxes to receive and to send e-mail. Using the default option of giving these users full mailbox access to the shared mailboxes, that is easily and automatically accomplished. With some tweaking (Set-MailboxSentItemsConfiguration), I can even have a copy of sent items stored in the Sent Items folder of the shared mailbox, so everyone is up to date on what has been sent. Nice.
But I also need separate signatures for all mailboxes, and I need to be able to configure different local cache period settings. For the primary mailbox I need to keep a local copy of about 6 months (for fast searching), but for the shared mailboxes one month would do. This keeps the local .ost files a lot smaller compared to the scenario where all shared mailboxes have the same cache period.
The only way I know to accomplish this is by using extra Outlook accounts instead of extra Outlook mailboxes. Now I need to find a way to add the extra accounts automatically to the Outlook profile. In the pre-Exchange-365 era, I would have used Microsoft's Office Customization Tool to create a basic .prf file, used VBScript to find the shared mailboxes the current user has access to and add these to the .prf profile, had the user start Outlook with the /importprf switch, and voila.
But now I'm already stuck at creating the .prf file with the OCT. What should I use for the Exchange Server name? The weird GUID you find after manually configuring Outlook with Exchange 365? Maybe the OCT is not the best option. I also found a PowerShell tool called PowerMAPI (http://powermapi.com), but it's hard to find out whether it works with Exchange 365. The same goes for Outlook Redemption (http://www.dimastr.com/redemption/home.htm). Does anyone have experience with these tools? Or am I making this far more complicated than needed? I'm open to all suggestions.
These messages appear in the error log about two to three seconds after the end of my page load. However, the complete page load takes only a few seconds. I am using mod_reqtimeout with this setting:
RequestReadTimeout header=20-40,minrate=500
Since the page load only takes a few seconds, I do not understand why the request-header read-timeout messages are being logged to the error log.
Why are these messages appearing and what can I do to remedy this?
I'm running Ubuntu 15.10 server on an ASRock E3C226D2I board. When I get a kernel update or run update-initramfs -u, I get a warning about missing firmware:
root@fileserver:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-4.2.0-27-generic
W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast
I can't find much information on this particular firmware, other than it is probably for my video card. Since I'm running a server I don't really care about graphics (no monitor attached).
Everything works fine, so I'm ignoring it for now, but is there a way to fix this?
I am facing a problem remoting into a machine using a Domain account.
Problem Facts :
The host VMs are hosted by Company A (read: Domain A). The VMs have a local administrator as well as Domain A user accounts that are in the "Administrators" group on the VMs.
I belong to a Company B (Domain B).
I use a VPN provided by Company A to have access to their network.
I was previously able to use mstsc from Computer on Domain B to remote into any of VM's on Domain A.
Recently Company A migrated their Domain A into Domain Z.
Now I am not able to remote from a computer on Domain B into a VM on Domain Z using my Domain Z user account; however, I am able to log in using the local user account. The error for the domain account is a generic "credentials not valid".
My Domain Z accounts work when I remote into another VM (say VM1) using my domain account after logging into VM2 as local admin. (VM1 and VM2 are both on Domain Z.)
The problems in steps 6 and 7 only SEEM to occur in domain-based environments (Domain B, where my local machine is located, and Domain C, where another company's user is facing the same issue as me).
When trying from a local machine with Windows freshly installed (no domain, no AV, default OS options) over the Company A VPN, everything works fine, i.e., I can remote into the VMs using domain accounts.
Windows 7 Enterprise on my machine; Windows 7, 2008 R2, and 8.1 as guest VMs. On the guest machine, I tried deactivating the firewall, stopping the Forefront security app, and removing the machine from the domain and connecting to the internet directly, but it still would not connect. (Maybe some group policy is causing the issue, and removal from the domain does not deactivate the policy. The surprising factor was that people from Company C were also facing the same issue.)
Trying to get a basic Django app running on nginx using uWSGI. I keep getting a 502 error with the error in the subject line. I am doing all of this as root, which I know is bad practice, but I am just practicing. My config file is as follows (it's included in the nginx.conf file):
As far as I can tell, I am passing all requests on port 80 (from nginx.conf) upstream to localhost on my virtual host, where uWSGI is listening on port 8080. I've tried this with a variety of permissions, including 777. If anyone can point out what I'm doing wrong, please let me know.
The iLO web interface allows me to upload a .bin file (Obtain the firmware image (.bin) file from the Online ROM Flash Component for HP Integrated Lights-Out.)
The iLO web interface redirects me to a page on the HP support website (http://www.hp.com/go/iLO) where I am supposed to find this .bin firmware, but no luck for me. The support website is a mess: very slow, badly categorized, and generally unusable.
Where can I find this .bin file? The only related link I am able to find asks me about my server's operating system (what does this have to do with the iLO?!) and lets me download an .iso with no .bin file in it.
And a related question: what is the latest iLO 3 version? (This is for a ProLiant DL380 G7; I'm not sure whether the iLO version is tied to the server model.)
I'm running JBoss AS 7.2. I'm trying to configure all log files to go to /var/log/jboss-as, but only the console log is going there. I'm using the init.d script provided with the package, and it calls standalone.sh. I'm trying to avoid modifying the startup scripts.
I've tried adding JAVA_OPTS="-Djboss.server.log.dir=/var/log/jboss-as" to my /etc/jboss-as/jboss-as.conf file but the init.d script doesn't pass JAVA_OPTS to standalone.sh when it calls it.
The documentation also says I should be able to specify the path via XML with the following line in standalone.xml:
However, it doesn't say where in the file to put it. Every place I try causes JBoss to crash on startup, saying that it can't parse the standalone.xml file correctly.
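For reference, my best guess from the AS 7 docs (unverified, and possibly exactly what I'm getting wrong) is that the property belongs in a <system-properties> element placed directly after <extensions>:

<server xmlns="urn:jboss:domain:1.4">
    <extensions>
        <!-- ... -->
    </extensions>
    <!-- assumption: system-properties must come directly after extensions -->
    <system-properties>
        <property name="jboss.server.log.dir" value="/var/log/jboss-as"/>
    </system-properties>
    ...
</server>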
Sorry for this foolish question, but I have little knowledge about servers, so bear with me!
I have configured Citadel as directed in the Linode documentation and can log in using the Citadel front end. I can send emails with it. How can I configure SMTP and use it as a mail service for sending emails from Laravel, a PHP framework? Any help will be appreciated.
I configured it as follows:
Enter 0.0.0.0 for listen address
Select Internal for authentication method
Specify your admin <username>
Enter an admin <password>
Select Internal for web server integration
Enter 80 for Webcit HTTP port
Enter 443 for the Webcit HTTPS port (or enter -1 to disable it)
Select your desired language
After this, I entered the mail name in /etc/mailname. My Laravel mail config is:
/*
|--------------------------------------------------------------------------
| SMTP Host Address
|--------------------------------------------------------------------------
|
| Here you may provide the host address of the SMTP server used by your
| applications. A default option is provided that is compatible with
| the Postmark mail service, which will provide reliable delivery.
|
*/

'host' => 'mail.hututoo.com',

/*
|--------------------------------------------------------------------------
| SMTP Host Port
|--------------------------------------------------------------------------
|
| This is the SMTP port used by your application to delivery e-mails to
| users of your application. Like the host we have set this value to
| stay compatible with the Postmark e-mail application by default.
|
*/

'port' => 25,

/*
|--------------------------------------------------------------------------
| Global "From" Address
|--------------------------------------------------------------------------
|
| You may wish for all e-mails sent by your application to be sent from
| the same address. Here, you may specify a name and address that is
| used globally for all e-mails that are sent by your application.
|
*/

'from' => array('address' => 'no-reply@hututoo.com', 'name' => null),

/*
|--------------------------------------------------------------------------
| E-Mail Encryption Protocol
|--------------------------------------------------------------------------
|
| Here you may specify the encryption protocol that should be used when
| the application send e-mail messages. A sensible default using the
| transport layer security protocol should provide great security.
|
*/

'encryption' => 'tls',

/*
|--------------------------------------------------------------------------
| SMTP Server Username
|--------------------------------------------------------------------------
|
| If your SMTP server requires a username for authentication, you should
| set it here. This will get used to authenticate with your server on
| connection. You may also set the "password" value below this one.
|
*/

'username' => 'passname',

/*
|--------------------------------------------------------------------------
| SMTP Server Password
|--------------------------------------------------------------------------
|
| Here you may set the password required by your SMTP server to send out
| messages from your application. This will be given to the server on
| connection so that the application will be able to send messages.
|
*/

'password' => 'paswwordtest',

/*
|--------------------------------------------------------------------------
| Sendmail System Path
|--------------------------------------------------------------------------
|
| When using the "sendmail" driver to send e-mails, we will need to know
| the path to where Sendmail lives on this server. A default path has
| been provided here, which will work well on most of your systems.
|
*/

'sendmail' => '/usr/sbin/citmail -t',
However, there are some local users used for services. When I try to change the password for one of those, as root, it asks for Current Kerberos password then exits:
passwd service1
Current Kerberos password: (I hit enter)
Current Kerberos password: (I hit enter)
passwd: Authentication token manipulation error
passwd: password unchanged
If I switch to the local user and run passwd, it asks once for Kerberos and then falls back to local:

$ passwd
Current Kerberos password:
Changing password for service1.
(current) UNIX password:
My configuration is similar to the site I posted above, and everything works fine; I just can't change the local users' passwords as root.
The server's RDP certificate expires every 6 months and is automatically recreated, meaning I need to re-install the new certificate on the client machines to allow users to save passwords.
Is there a straightforward way to create a self-signed certificate with a longer expiry?
I have 5 servers to configure.
Also, how do I install the certificate such that terminal services uses it?
Note: Servers are not on a domain and I'm pretty sure we're not using a gateway server.
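What I've been considering is something like the following sketch (based on the PKI and Terminal Services WMI docs; it assumes a Windows version where New-SelfSignedCertificate supports -NotAfter, and "myserver.local" is a placeholder):

# 1) Create a long-lived self-signed cert in the machine store.
$cert = New-SelfSignedCertificate -DnsName "myserver.local" `
    -CertStoreLocation Cert:\LocalMachine\My `
    -NotAfter (Get-Date).AddYears(5)

# 2) Point RDP (terminal services) at it by thumbprint.
$path = (Get-WmiObject -Namespace root\cimv2\TerminalServices `
    -Class Win32_TSGeneralSetting -Filter "TerminalName='RDP-Tcp'").__PATH
Set-WmiInstance -Path $path -Argument @{SSLCertificateSHA1Hash = $cert.Thumbprint}

I don't know whether this is the right approach for servers that are not on a domain, so corrections welcome.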
I have a Debian 6 server, and I was previously using Apache with mod_php but decided to switch to fcgi instead, since WordPress was somehow causing Apache to crash. I have the following in my site's Apache config file:
Everything works fine if I don't include the SuexecUserGroup line, but then it obviously runs the script as www-data instead of the user and group above. When I include that line, I get a 500 error, and the following shows up in my suexec.log file:
[2013-05-22 16:00:12]: command not in docroot (/usr/lib/cgi-bin/php5)
Everything was installed using the packages, so I don't even know where the docroot is.
I have a small server room, approx 7' x 12', with an A/C unit dedicated to the room; it is positioned on one of the short (7') sides and blows air across the room towards the other short (7') side.
The server room is set to a temperature of 69F but usually only gets down to around 70-71F (as measured by the thermostat control panel on the wall).
I have two 1-Wire temperature sensors plugged into a Linux box that graphs the measured temperatures. Right now the sensors hang on one of the long (12') sides and are positioned close together.
I don't think this placement gives an accurate representation of the room's real temperatures, and I would like to fix it. Where is it best to position the temperature sensors in a room like this? I don't think hanging them from the drop ceiling would work, since the A/C unit would blow cold air directly on them (skewing the measurements terribly).
I know this question might sound too easy, and I should have read all the docs available on the internet; the truth is that I did, and I had no luck. It's kind of confusing for me: I have installed this kind of thing many times for Apache, but never for Tomcat.
I want to install a certificate from GoDaddy, so I followed these instructions:
I have several standalone Win2008 (R1+R2) servers (no domain) and each of them has dozens of scheduled tasks. Each time we set up a new server, all these tasks have to be created on it.
The tasks do not live in the root of the Task Scheduler Library; they reside in subfolders, up to two levels deep.
I know I can use schtasks.exe to export tasks to an xml file and then use:
schtasks.exe /CREATE /XML ...
to import them on the new server. The problem is that schtasks.exe creates them all in the root, not in the subfolders where they belong. There is also no way in the GUI to move tasks around.
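One thing I have not been able to verify: /TN appears to accept a folder path, so perhaps the import can target a subfolder directly, something like:

schtasks.exe /CREATE /XML "C:\export\Cleanup.xml" /TN "\Maintenance\Cleanup"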
Is there a tool that allows me to manage all my tasks centrally, and allows me to create them in folders on several machines? It would also make it easier to set the 'executing user and password'.
The results of the 2021 Debian Project Leader (DPL, Debian Project Leader) election have been announced: Jonathan Carter has been elected DPL once again, and his new term begins on 2021-04-21. Two candidates were nominated in this election: Jonathan Carter and Sruthi Chandran, a candidate in the previous DPL elections. Jonathan Carter [jcc@debian.org] [nomination mail] [platform] Sruthi...
I landed upon this expression while solving a problem: $$\vec a\times\vec b\,(\vec a \cdot \vec c)-\vec a\times\vec c\,(\vec a \cdot \vec b)$$ To simplify this, I thought of factoring the $\vec a$ out, and it seemed okay to do so, since cross products are distributive. But I don't know what happens to the $\vec c$ and $\vec b$ each was dotted with when it's taken out. Together, each pair of dotted vectors formed a scalar multiplier for each vector term, but now that I've taken the $\vec a$ out, I have no idea how to treat the other two.
Is taking the $\vec a $ out wrong here? If so, why?
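For concreteness, the factoring I have in mind is $$\vec a\times\vec b\,(\vec a\cdot\vec c)-\vec a\times\vec c\,(\vec a\cdot\vec b)=\vec a\times\bigl[(\vec a\cdot\vec c)\,\vec b-(\vec a\cdot\vec b)\,\vec c\bigr],$$ using linearity of the cross product in its second argument; my doubt is whether the scalars can simply stay attached to $\vec b$ and $\vec c$ like this.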
I know this one is a soft question, so I will tag it as such, but I'm aware of the following ways in which derivatives are represented:
let's say we have a function: $$f=f(x,y)$$ then the derivative with respect to $x$, e.g., is: $$\frac{\partial f}{\partial x}$$ but often it will be written as: $$\partial_xf,\,f_x$$ and several other forms. The reason I ask is that I find, when you have equations with large numbers of derivatives, it can be tedious to type or write them all out fully; e.g.: $$\frac{\partial^2T}{\partial r^2}+\frac1r\frac{\partial T}{\partial r}+\frac{1}{r^2}\frac{\partial^2T}{\partial\theta^2}=0$$ can be nicely abbreviated to: $$\partial^2_rT+\frac{\partial_rT}{r}+\frac{\partial^2_\theta T}{r^2}=0$$ or maybe: $$T_{rr}+\frac1rT_r+\frac1{r^2}T_{\theta\theta}=0$$ Does anyone have any opinions on which shorthand is the least ambiguous or more generally accepted? If you have any other interesting notation, I'd like to see it. I am also aware of common notation like $\Delta,\nabla$.
I am asked to think of an example of two sets X and Y with the same cardinality and a function from X to Y that is one-to-one but not onto. I am so confused about this one, because I thought there had to be a one-to-one correspondence between X and Y for their cardinalities to be the same. How is it possible for there to be just a one-to-one (but not onto) map? Can anyone please explain? Thank you!
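To make my confusion concrete: is something like $$f:\mathbb N\to\mathbb N,\qquad f(n)=2n$$ the kind of example being asked for? It is one-to-one but not onto, yet $\mathbb N$ certainly has the same cardinality as itself.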
Is there a mathematical terminology for non-numerical variable such as
Head/Tail (referring to a coin flip), Rock/Paper/Scissors, or Up/Down?
For example, I want to say that my set $\mathcal{S} $ is filled with ______, where _______ are things like Head, Tail/ Left, Right/ Up, Down, etc. One such set is $\mathcal{S} $ = {Head, Tail}, another set is $\mathcal{S}$ = {Left, Right}, yet another set is $\mathcal{S}$ = {Happy, Sad, Mad}.
Note that I am not assuming a probability context. Non-numerical variables appear all over math, such as in game theory (Confess/Betray).
What should I use as the proper mathematical terminology in the above blanks: States? Strings? Tokens? Literals? Symbols? Alphabets? Words?
I've been working on this question for about 3 hours now. Part (a) asks me to show that the derivative of the unit-speed parameterization is perpendicular to its second derivative. As far as I understood, this is only necessarily true in the case of a circle, but my instructor told me it works for any curve. As for part (b), I'm confused about the notation, since the notation used on the assignment is different from any parameterization notation I've seen elsewhere. Is the "unit speed parameterization" a single equation?
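(For part (a), the only route I can think of is differentiating $\langle\gamma'(s),\gamma'(s)\rangle=1$ to get $2\langle\gamma''(s),\gamma'(s)\rangle=0$; is that the intended argument, and does it really need no circle assumption?)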
Given an arbitrary metric $d$, I want to define a concave continuous function in terms of $d$. For example if $d$ is the Euclidean metric then it is convex, so $-d$ is concave.
Ideally there would be some function which transforms any metric into a concave continuous function but I don't think that's very likely. Instead I wonder if it is possible classify the kinds of metrics and give a function for each kind of metric, like in the following manner:
Suppose every metric is either concave or convex, then either $d$ or $-d$ is a concave continuous function.
So is there some classification of metrics in terms of (quasi/strict) convexity and concavity?
Alternatively let me know if you have good reason to believe that it is impossible to construct a concave continuous function from an arbitrary metric
9-19. Let $M$ be $\mathbb{R}^{3}$ with the $z$ -axis removed. Define $V, W \in \mathfrak{X}(M)$ by $$ V=\frac{\partial}{\partial x}-\frac{y}{x^{2}+y^{2}} \frac{\partial}{\partial z}, \quad W=\frac{\partial}{\partial y}+\frac{x}{x^{2}+y^{2}} \frac{\partial}{\partial z} $$ and let $\theta$ and $\psi$ be the flows of $V$ and $W$, respectively. Prove that $V$ and $W$ commute, but there exist $p \in M$ and $s, t \in \mathbb{R}$ such that $\theta_{t} \circ \psi_{s}(p)$ and $\psi_{s} \circ \theta_{t}(p)$ are both defined but are not equal.
These are obviously not equal, but both are defined for all of $\mathbb{R}^3$ with the $z$-axis removed, which seems to contradict Theorem 9.44 (vector fields commute if and only if their flows commute), unless I have made a mistake in the computation. Is my computation correct? It was quite hard to compute. Why is this not consistent with the theorem?
Let $X,Y$ be two topological spaces and $f,g:X\rightarrow Y$ be two homotopic continuous maps between them. My question is: when will the adjunction spaces $X\bigsqcup_f Y$ and $X\bigsqcup_g Y$ be homotopy equivalent? Intuitively, $f,g$ give the way of attaching, and if $f\simeq g$, then we can think of the attaching determined by $f$ as being transformed continuously into the attaching determined by $g$, and hence the resulting adjunction spaces should be homotopy equivalent. But I suspect this result is not true in general, otherwise it would be proved explicitly in standard textbooks of algebraic topology. So my question is: when is the above true? In asking this, I'm not after the most general result; I want some special cases, and a justification of why my intuition fails in general. Thanks.
Aside: the reason I came up with this question is that I want to find the relation between homotopy of continuous maps and homotopy equivalence of spaces; more explicitly, I want to know whether there is any sense in which homotopic maps induce homotopy-equivalent spaces.
In Stein's Real Analysis, I'm trying to prove that if $F$ is of bounded variation, then it is differentiable almost everywhere. To do this, the book uses Dini numbers.
Let $\Delta_h (F)(x) = {F(x+h) - F(x) \over h}$. We consider four Dini numbers at $x$. \begin{align*} D^+(F)(x) &= \limsup_{\substack{h \to 0 \\ h > 0}} \Delta_h (F)(x) \\ D_+(F)(x) &= \liminf_{\substack{h \to 0 \\ h > 0}} \Delta_h (F)(x)\\ D^-(F)(x) &= \limsup_{\substack{h \to 0 \\ h < 0}} \Delta_h (F)(x) \\ D_-(F)(x) &= \liminf_{\substack{h \to 0 \\ h < 0}} \Delta_h (F)(x)\\ \end{align*} To prove this theorem, it suffices to show that (i) $D^+(F)(x) < \infty$ for a.e. $x$, and (ii) $D^+(F)(x) \le D_-(F)(x)$ for a.e. $x$. Indeed, if these results hold, then by applying (ii) to $-F(-x)$ instead of $F(x)$ we obtain $D^-(F)(x) \le D_+(F)(x)$ for a.e. $x$. Therefore, $$D^+ \le D_- \le D^- \le D_+ \le D^+$$
I am currently reading Atiyah's book on commutative algebra, and someone gave me this question that I can't quite figure out: which of the following extensions, $\mathbb{Z}[\frac{1 + \sqrt{5}}{2}]$ and $\mathbb{Z}[\frac{1 + \sqrt{3}}{2}]$, is an integral extension of $\mathbb{Z}$, and which one is not? To be honest, I've tried using the fact that $\mathbb{Z}[x]$ must be finitely generated for every $x \in \mathbb{Z}[\frac{1 + \sqrt{5}}{2}]$ for the latter to be an integral extension (here I can see easily that $x = a + b\frac{1 + \sqrt{5}}{2}$ for some integers $a,b$), but I've struggled to see how to derive the proposition I want. If anyone could show me the proof and the reason why one of these two is not an integral extension, I would really appreciate it.
Let $(u_n)$ and $(v_n)$ be two real sequences with limits $L$ and $M$ respectively. If $x_n=\max(u_n,v_n)$ and $y_n=\min(u_n,v_n)$, prove that $x_n$ and $y_n$ converge to $\max(L,M)$ and $\min(L,M)$ respectively.
My attempt: It is given that $u_n$ and $v_n$ converge to $L$ and $M$ respectively, so $u_n+v_n \to L+M$.
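The identity I am considering using (assuming this is the intended route) is $$\max(u_n,v_n)=\frac{u_n+v_n+|u_n-v_n|}{2},\qquad \min(u_n,v_n)=\frac{u_n+v_n-|u_n-v_n|}{2}.$$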
The typical CAGR formula assumes that upon receipt of initial profits, all profits can be reinvested at the same rate of return. This is useful for people investing in the stock market, where they can buy fractions of a share. However, what if an investment required fixed amounts of capital for reinvestment? For example, if someone bought and sold a home for a profit of 10%, they could not then go buy 1.1 homes, even though they now have 110% of their original starting capital. Assuming a consistent growth rate and cost of investment, they would need to repeat this action 10 times before they could buy two homes at once. Then, of course, they would only need to repeat the process 5 times on the two homes before moving up to three homes at a time ... we all get the point. Does anyone know of a formula for this?
Edit: I am aware my example is imperfect - obviously in the real world real estate investors use the extra capital to buy nicer homes. It was the best hypothetical I could do to get the point across. Another effective example would be trading in the stock market without fractional shares.
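To pin down what I mean, here is a small simulation sketch of the house example (assumptions: each unit costs 1.0, every cycle pays the same rate, and leftover cash carries over):

# Sketch: CAGR-style growth when profits can only buy whole units.
def cycles_to_reach(target_units: int, rate: float = 0.10) -> int:
    units, cash, cycles = 1, 0.0, 0
    while units < target_units:
        cash += units * rate          # profit from this cycle
        while cash >= 1.0:            # reinvest only in whole units
            units += 1
            cash -= 1.0
        cycles += 1
    return cycles

print(cycles_to_reach(2))  # 10 cycles, as in the example
print(cycles_to_reach(3))  # 15 cycles total (10 with one unit, 5 with two)

What I'm after is a closed-form version of this.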
I'm trying to solve this irrational integral $$ \int \frac{x^3}{\sqrt{x^2+x}}\, dx$$ doing the substitution
$$ x= \frac{t^2}{1-2 t}$$ according to the rule.
So the integral becomes: $$ \int \frac{-2t^6}{(1-2t)^4}\, dt= \int (-\frac{1}{8}t^2-\frac{1}{4}t-\frac{5}{16}+\frac{1}{16}\frac{-80t^3+90t^2-36t+5}{(1-2t)^4})\, dt=\int (-\frac{1}{8}t^2-\frac{1}{4}t-\frac{5}{16}+\frac{1}{16}(\frac{10}{1-2t}-\frac{15}{2} \frac{1}{(1-2t)^2}+\frac{3}{(1-2t)^3}-\frac{1}{2} \frac{1}{(1-2t)^4}))\, dt=-\frac{1}{24}t^3-\frac{1}{8}t^2-\frac{5}{16}t-\frac{5}{16}\cdot \ln|1-2t| -\frac{15}{64}\frac{1}{1-2t}+\frac{3}{64} \frac{1}{(1-2t)^2}-\frac{1}{16 \cdot 12} \frac{1}{(1-2t)^3}+\text{const} $$ with $t=-x+ \sqrt{x^2+x}$.
The final result according to my book is instead $(\frac{1}{3}x^2-\frac{5}{12}x+\frac{15}{24})\sqrt{x^2+x}-\frac{5}{16}\ln( x+\frac{1}{2}+ \sqrt{x^2+x})$
And trying to obtain the same solution by substituting $t$ back into the formulas, I'm definitely lost in the calculation. I don't understand why there is this difference in the complexity of the solutions. Can someone show me where I'm making mistakes?
I was reading this Wikipedia article: https://en.wikipedia.org/wiki/Von_Neumann_universe, and it mentions that the Axiom of Replacement is required to go outside of $V_{\omega+\omega}$, one of the levels of the von Neumann Hierarchy. If I'm correct, this means that the Axiom of Replacement would be required to construct a set of cardinality $\beth_\omega$, since such sets would only exist in higher levels of the hierarchy.
But now let the set $N_0 = \mathbb N$, and let $N_{i+1} = P(N_i)$, where $P(N_i)$ is the power set of $N_i$. Now let the set $A$ be the union of all $N_i$ for each $i \in \mathbb N$. Now it seems to me that $A$ has cardinality $\beth_\omega$. Certainly, $N_i$ has cardinality $\beth_i$, and $A$ cannot have cardinality $\beth_m$ for any natural number $m$, because $A$ contains $N_{i+1}$, which has a strictly larger cardinality. So the cardinality of $A$ must be higher than $\beth_m$ for all $m\in \mathbb N$. Further, sets of cardinality $\beth_{\omega + 1}$ or higher can be easily constructed by taking $P(A)$ and so on.
If $A$ does not have cardinality $\beth_\omega$, then how so? And if it does, where in this construction is the Axiom of Replacement invoked?
$N_0$ (or something analogous) exists by the Axiom of Infinity, and then all the other $N_i$ exist by the Axiom of Power Set. Admittedly, there is then some subtlety in applying the Axiom of Union, since sets must live in a larger set together before a union can be taken. One might try to construct $A$, or at least a similar set, by repeatedly using the Axiom of Pairing and the Axiom of Union, but that doesn't seem to work.
But I'm still unsure how exactly adding the Axiom of Replacement solves this problem. If we want to invoke the Axioms of ZFC explicitly in the construction of $A$, where and how does the Axiom of Replacement come into play?
I currently have the topic Newtonian gravity, which is described as a field theory by means of the Poisson equation $$\Delta \phi = 4\pi G\rho\,.$$ As an assignment I have: derive the equation of motion for test masses. (1 point, short task)
I can make something of the individual words, but I don't understand their combination in this sentence. What should I derive, and from what? Should I differentiate with respect to the mass, i.e. $\frac{d}{dm}$? And why? What would be the point of that? Where should this lead me? This task confuses me.
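If I had to guess, the intended result is the trajectory equation of a test mass $m$ in the field, $$m\,\ddot{\vec x}=-m\,\nabla\phi\quad\Longrightarrow\quad \ddot{\vec x}=-\nabla\phi,$$ but I am not sure that is what is being asked.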
I'm trying to prove that if, for $P,Q,Z\in M(n,\mathbb{C})$, it holds that $\exp(\sigma P)\exp(\tau Q)=\exp(\hbar \sigma\tau Z)\exp(\tau Q)\exp(\sigma P)$ for all $\sigma, \tau\in \mathbb{R}$, then $[P,Q]=Z$, $[P,Z]=0$, and $[Q,Z]=0$.
To prove $[P,Q]=Z$, one can just differentiate with respect to $\sigma$, and then $\tau$, but as far as I can tell, differentiating does not yield the other two as easily.
What should I begin with, and what should I learn? Also, can you tell me the best textbooks in which I can find problems that teach concepts as well as critical and creative thinking? Also, please tell me about the best YouTube channels out there that explain maths at a master's level, so I can pursue my PhD when I get a little older than I am now. Also, can you please tell me how I should apply to prestigious universities and what their requirements are?
I am confused on this problem. My professor gave this as the solution:
$S_{N_t}$ is the time of the last arrival in $[0, t]$. For $0 < x \leq t$, $$P(S_{N_t} \leq x) = \sum_{k=0}^{\infty} P(S_{N_t} \leq x \mid N_t=k)\,P(N_t=k).$$
If $N_t = 0$, then $S_{N_t} = S_0 = 0$. This occurs with probability $P(N_t = 0) = e^{- \lambda t}$.
Therefore, the cdf of $S_{N_t}$ is: $$P(S_{N_t} \leq x) = \begin{cases} 0, & x < 0, \\ e^{- \lambda (t-x)}, & 0 \leq x < t, \\ 1, & x \geq t. \end{cases}$$
I don't really understand the part of creating the variable M of the maximum of k i.i.d. random variables in order to solve the problem. Any help would be greatly appreciated, thank you!
Regard $S\mu$ as a graded $S$-module, with $\operatorname{deg}\mu = d$. Then the short exact sequence is graded, so for each $i \in \mathbb Z$ we get a short exact sequence of vector spaces
$$0 \rightarrow (S/J)_{i - d} \rightarrow (S/I')_{i} \rightarrow (S/I)_i \rightarrow 0 $$ which implies that $H_{S/I}(i) + H_{S/J}(i -d) = H_{S/I'}(i)$ and this gives us an algorithm to find $H_{S/I}(i).$
Now, here is her solution to the question I mentioned above:
Now, since $H_{S/J'}(i - 2) = i - 2$ for $i \geq 2$, it remains to find $H_{S/I''}(i)$.
Now, since $S/I'' = \frac{k[x_1, x_3]}{(x_1 x_3)}[x_2, x_4]$, a polynomial ring over $\frac{k[x_1, x_3]}{(x_1 x_3)}$ in the indeterminates $x_2, x_4$, with monomials $x_1^a x_{2}^b x_4^c$ and $x_3^a x_2^b x_4^c$, we get $$S/I'' = k[x_1, x_2, x_4] + k[x_3, x_2, x_4].$$
But then my professor wrote $H_{S/I''}(d) = 2 \frac{(d + 1)(d + 2)}{2} - (d + 1) = (d + 1)^2$. She said the multiplication by $2$ is because $S/I'' = k[x_1, x_2, x_4] + k[x_3, x_2, x_4]$ is a sum of two polynomial rings (is that reasoning correct?), and the subtraction of $d+1$ is because we have counted the monomials in $x_2, x_4$ alone twice. But I do not understand why the number of those monomials is $d+1$; could someone explain this to me, please?
Also, I know that $H_{S/I}(i)$ is called the Hilbert function at degree $i$, but what should be called the Hilbert polynomial, and it is a polynomial in which indeterminate? Could anyone clarify this for me, please?
Now we can almost extract the intended term($S^{'}_{n}$): $$ \sum_{k=2}^{n} {{2n-1-k} \choose {n-1}} (a^{n}b^{n-k}x^{k} + a^{n-k}b^ny^{k}) + \sum_{k=2}^{n} {{2n-1-k} \choose {n-1}} (a^{n-k+1}b^{n-1}xy^{k-1} + a^{n-1}b^{n-k+1}x^{k-1}y) + \sum_{k=2}^{n} (\frac{n-1}{2n-1-k}-1) {{2n-1-k} \choose {n-1}} [...] $$
There is further derivation, but it does not seem very promising. The idea of this theorem is really interesting. I'm asking for a simpler approach, or for how I should continue my proof. Thank you in advance!
I would like to illustrate the double layer potential idea with a simple 1d example, but seem to run into a situation where the resulting integral equation is singular.
The problem is $u''(x) = 0$ on $[0,1]$, subject to $u(0) = a$, $u(1) = b$. A free-space Green's function for this problem is given by $G_0(x,y) = \frac{1}{2}|x-y|$. This satisfies four desirable properties of the free-space Green's function :
$G_0(x,y)$ is continuous on $[0,1]\setminus\{y\}$.
$\partial^2 G_0(x,y)/\partial x^2 = 0$ on $[0,1]\setminus\{y\}$.
where $H(x)$ is the Heaviside function. To get an integral equation, I evaluate the above at the endpoints $x = 0^+$ and $x = 1^+$, where "+" indicates taking a limit as $x$ approaches boundary point from within the interval $[0,1]$. The resulting integral equation is given by
which is clearly singular, and can only be solved if $a = b$.
My question is, where did I go wrong? Or, if the above is correct, is there an explanation for why the 1d double layer potential doesn't exist for $a \ne b$?
I have considered the following ideas :
This is really a 2d problem in an infinite strip, and as such, maybe the "boundary" isn't really closed, and so therefore, the solution cannot be expressed as a double layer potential. This sounds dubious, however, since harmonic functions certainly exist in infinite and semi-infinite domains.
Design a different dipole expression by solving $w''(x) = -\delta'(x)$ and choosing constants of integration to satisfy jump conditions in the potential at $x=0$ and $x=1$. For example, $w(x) = -H(x) + \frac{1}{2}(x) + \frac{1}{2}$ works. This leads to the potential
with $\mu(0) = 2a$ and $\mu(1) = 2b$. This satisfies necessary double-layer jump conditions, but the dipole representation is not obviously the derivative of a free-space Green's function.
The solvability issue goes away if $H(0)$ is defined to be $1/2$. In this case, the dipole densities become $\mu(0) = 2b$, and $\mu(1) = 2a$. But the solution is still not the harmonic function $a(1-x) + bx$.
The following is an exercise from Bruckner's Real Analysis (the book in second line not "elementary" one):
Let $(a_n)$ be a sequence of positive numbers converging to zero. If $f$ is continuous, then certainly $f(x-a_n)$ converges to $f(x)$. Find a bounded measurable function on $[0,1]$ such that the sequence of functions $f_n(x) = f(x-a_n)$ is not a.e. convergent to $f$. [Hint: Take the characteristic function of a Cantor set of positive measure.]
I don't understand how "$f_n(x) = f(x-a_n)$ is not a.e. convergent to $f$" can happen at all: the functions $f(x-a_n)$ are 'moving' to become $f(x)$ as $n \to \infty$. Do we only get a counterexample when we consider an $f$ that is not the a.e. limit of the $f_n$? And what is the use of fat Cantor sets here?
Let $n>2$ and parentheses are not allowed. Then, there are equivalent ways to ask this:
Given any set of $n-1$ non-multiples of $n$, can we make a multiple of $n$ using $+,-$?
Given any $n-1$ non-zero elements of $\mathbb Z/n\mathbb Z$, can we make $0$ using $+,-$?
Alternatively, we can ask to partition an $(n-1)$-element (multi)set $S$ into two subsets $S_+$ and $S_-$ such that the difference between the sum of the elements in $S_+$ and the sum of the elements in $S_-$ is a multiple of $n$ (is equal to $0$ modulo $n$).
For example, if $n=3$ then there are only $3$ (multi)sets we need to consider:
which are all solvable (we can make a $0$ in $\mathbb Z/n\mathbb Z$).
In general, there are $\binom{2n-3}{n-1}$ (multi)sets to consider for a given $n$.
My conjecture is that any such (multi)set is solvable if and only if $n$ is a prime number.
If $n$ is not prime, then it is not hard to see that this cannot be done for all (multi)sets. If $n$ is even, then take all $n-1$ elements to equal $1$, to build an unsolvable (multi)set. If $n$ is odd, then take $n-2$ elements to equal to a prime factor of $n$ and last element to equal to $1$, to build an unsolvable (multi)set.
It remains to show that if $n$ is prime, then all such (multi)sets are solvable.
I have confirmed this for $n=3, 5, 7, 11, 13$ using a naive brute force search.
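My search was essentially the following sketch (naive, and already slow at $n=13$):

from itertools import combinations_with_replacement, product

def solvable(ms, n):
    # Is some +/- signing of the elements congruent to 0 mod n?
    return any(sum(s * x for s, x in zip(signs, ms)) % n == 0
               for signs in product((1, -1), repeat=len(ms)))

def conjecture_holds(n):
    return all(solvable(ms, n)
               for ms in combinations_with_replacement(range(1, n), n - 1))

print([n for n in (3, 5, 7, 11, 13) if conjecture_holds(n)])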
Can we prove this conjecture? Or, can we find a prime that does not work?
Does anyone have an idea of how to find the matrices $A, B, C, D$, either theoretically or numerically? I'm thinking that a solution should exist, since there are four equations and four unknowns.
Hints would suffice. Thank you so much.
Update: I tried to run the first code in SAS but it gave me the following results:
What could I be doing wrong?
proc optmodel;
num n = 4;
set ROWS = 1..n;
set COLS = ROWS;
set CELLS = ROWS cross COLS;
var A {CELLS} binary;
var B {CELLS} binary;
var C {CELLS} binary;
var D {CELLS} binary;
var Y {1..4, CELLS} >= 0 integer;

/* M[i,j,k] = A[i,k]*A[k,j] */
var M {1..n, 1..n, 1..n} binary;
con Mcon1 {i in 1..n, j in 1..n, k in 1..n}: M[i,j,k] <= A[i,k];
con Mcon2 {i in 1..n, j in 1..n, k in 1..n}: M[i,j,k] <= A[k,j];
con Mcon3 {i in 1..n, j in 1..n, k in 1..n}: M[i,j,k] >= A[i,k] + A[k,j] - 1;

/* X[i,j,k] = B[i,k]*C[k,j] */
var X {1..n, 1..n, 1..n} binary;
con Xcon1 {i in 1..n, j in 1..n, k in 1..n}: X[i,j,k] <= B[i,k];
con Xcon2 {i in 1..n, j in 1..n, k in 1..n}: X[i,j,k] <= C[k,j];
con Xcon3 {i in 1..n, j in 1..n, k in 1..n}: X[i,j,k] >= B[i,k] + C[k,j] - 1;

/* O[i,j,k,l] = B[i,k]*C[k,l]*A[l,j] */
var O {1..n, 1..n, 1..n, 1..n} binary;
con Ocon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: O[i,j,k,l] <= B[i,k];
con Ocon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: O[i,j,k,l] <= C[k,l];
con Ocon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: O[i,j,k,l] <= A[l,j];
con Ocon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: O[i,j,k,l] >= B[i,k]+C[k,l]+A[l,j] - 2;

/* P[i,j,k,l] = A[i,k]*B[k,l]*C[l,j] */
var P {1..n, 1..n, 1..n, 1..n} binary;
con Pcon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: P[i,j,k,l] <= A[i,k];
con Pcon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: P[i,j,k,l] <= B[k,l];
con Pcon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: P[i,j,k,l] <= C[l,j];
con Pcon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: P[i,j,k,l] >= A[i,k]+B[k,l]+C[l,j] - 2;

/* A^2 + BC + BCA + ABC + A = I_4 */
con Con1 {<i,j> in CELLS}:
   sum {k in 1..n} M[i,j,k] + sum {k in 1..n} X[i,j,k]
   + sum {k in 1..n, l in 1..n} O[i,j,k,l] + sum {k in 1..n, l in 1..n} P[i,j,k,l]
   + A[i,j] = 2*Y[1,i,j] + (i=j);

/* E[i,j,k] = A[i,k]*B[k,j] */
var E {1..n, 1..n, 1..n} binary;
con Econ1 {i in 1..n, j in 1..n, k in 1..n}: E[i,j,k] <= A[i,k];
con Econ2 {i in 1..n, j in 1..n, k in 1..n}: E[i,j,k] <= B[k,j];
con Econ3 {i in 1..n, j in 1..n, k in 1..n}: E[i,j,k] >= A[i,k] + B[k,j] - 1;

/* F[i,j,k,l] = B[i,k]*C[k,l]*B[l,j] */
var F {1..n, 1..n, 1..n, 1..n} binary;
con Fcon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: F[i,j,k,l] <= B[i,k];
con Fcon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: F[i,j,k,l] <= C[k,l];
con Fcon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: F[i,j,k,l] <= B[l,j];
con Fcon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: F[i,j,k,l] >= B[i,k]+C[k,l]+B[l,j] - 2;

/* G[i,j,k,l] = A[i,k]*B[k,l]*D[l,j] */
var G {1..n, 1..n, 1..n, 1..n} binary;
con Gcon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: G[i,j,k,l] <= A[i,k];
con Gcon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: G[i,j,k,l] <= B[k,l];
con Gcon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: G[i,j,k,l] <= D[l,j];
con Gcon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: G[i,j,k,l] >= A[i,k]+B[k,l]+D[l,j] - 2;

/* AB + BCB + ABD = 0 */
con Con2 {<i,j> in CELLS}:
   sum {k in 1..n} E[i,j,k]
   + sum {k in 1..n, l in 1..n} F[i,j,k,l] + sum {k in 1..n, l in 1..n} G[i,j,k,l]
   = 2*Y[2,i,j];

/* H[i,j,k] = C[i,k]*A[k,j] */
var H {1..n, 1..n, 1..n} binary;
con Hcon1 {i in 1..n, j in 1..n, k in 1..n}: H[i,j,k] <= C[i,k];
con Hcon2 {i in 1..n, j in 1..n, k in 1..n}: H[i,j,k] <= A[k,j];
con Hcon3 {i in 1..n, j in 1..n, k in 1..n}: H[i,j,k] >= C[i,k] + A[k,j] - 1;

/* Q[i,j,k,l] = D[i,k]*C[k,l]*A[l,j] */
var Q {1..n, 1..n, 1..n, 1..n} binary;
con Qcon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: Q[i,j,k,l] <= D[i,k];
con Qcon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: Q[i,j,k,l] <= C[k,l];
con Qcon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: Q[i,j,k,l] <= A[l,j];
con Qcon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: Q[i,j,k,l] >= D[i,k]+C[k,l]+A[l,j] - 2;

/* R[i,j,k,l] = C[i,k]*B[k,l]*C[l,j] */
var R {1..n, 1..n, 1..n, 1..n} binary;
con Rcon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: R[i,j,k,l] <= C[i,k];
con Rcon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: R[i,j,k,l] <= B[k,l];
con Rcon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: R[i,j,k,l] <= C[l,j];
con Rcon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: R[i,j,k,l] >= C[i,k]+B[k,l]+C[l,j] - 2;

/* CA + DCA + CBC = 0 */
con Con3 {<i,j> in CELLS}:
   sum {k in 1..n} H[i,j,k]
   + sum {k in 1..n, l in 1..n} Q[i,j,k,l] + sum {k in 1..n, l in 1..n} R[i,j,k,l]
   = 2*Y[3,i,j];

/* S[i,j,k,l] = D[i,k]*C[k,l]*B[l,j] */
var S {1..n, 1..n, 1..n, 1..n} binary;
con Scon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: S[i,j,k,l] <= D[i,k];
con Scon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: S[i,j,k,l] <= C[k,l];
con Scon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: S[i,j,k,l] <= B[l,j];
con Scon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: S[i,j,k,l] >= D[i,k]+C[k,l]+B[l,j] - 2;

/* T[i,j,k,l] = C[i,k]*B[k,l]*D[l,j] */
var T {1..n, 1..n, 1..n, 1..n} binary;
con Tcon1 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: T[i,j,k,l] <= C[i,k];
con Tcon2 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: T[i,j,k,l] <= B[k,l];
con Tcon3 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: T[i,j,k,l] <= D[l,j];
con Tcon4 {i in 1..n, j in 1..n, k in 1..n, l in 1..n}: T[i,j,k,l] >= C[i,k]+B[k,l]+D[l,j] - 2;

/* DCB + CBD = I_4 */
con Con4 {<i,j> in CELLS}:
   sum {k in 1..n, l in 1..n} S[i,j,k,l] + sum {k in 1..n, l in 1..n} T[i,j,k,l]
   = 2*Y[4,i,j] + (i=j);

solve noobj with milp / maxpoolsols=100;
print A;
print B;
print C;
print D;
quit;
An exercise (8-12) in Lee's Introduction to Smooth Manifolds involves showing that if $F : \mathbb{R}^2 \to \mathbb{RP}^2$ is given by $F(x,y) = [x,y,1]$, then there is a vector field on $\mathbb{RP}^2$ that is $F$-related to the vector field $X = x\partial/\partial y - y\partial/\partial x$ on $\mathbb{R}^2$.
I solved this problem as follows: We begin by letting $U_1,U_2,U_3 \subset \mathbb{RP}^2$ be the open subsets on which the first, second, and third coordinates, respectively, are nonzero, and let $(u_i,v_i) : U_i \to \mathbb{R}^2$ be the usual coordinate systems for each $i = 1,2,3$. We then define a smooth vector field $Y_i$ in coordinates on each $U_i$ as follows: \begin{align*} Y_1 &= (u_1^2 + 1)\frac{\partial}{\partial u_1} + u_1v_1\frac{\partial}{\partial v_1} \\ Y_2 &= -(u_2^2 + 1)\frac{\partial}{\partial u_2} - u_2v_2\frac{\partial}{\partial v_2} \\ Y_3 &= -v_3\frac{\partial}{\partial u_3} + u_3\frac{\partial}{\partial v_3}. \end{align*} It's then a straightforward computation with Jacobians to show that these three vector fields agree on intersections, and so they extend to a smooth global vector field $Y$ on $\mathbb{RP}^2$. One more computation shows that $Y$ is $F$-related to $X$. (I might have made a computational error here but that's beside the point.)
Despite having a formula for the vector field $Y$, I still have no intuitive grasp of what it actually looks like. $\mathbb{RP}^2$ is already a pretty abstract object, and how to imagine vector fields on it is a mystery to me; the above coordinate representations don't shed much light on its structure. Is there a coordinate-independent way to define $Y$? I'm thinking maybe we can define a visualizable vector field on $\mathbb{R}^3 \setminus \{0\}$ that descends through the quotient map $q : \mathbb{R}^3 \setminus \{0\} \to \mathbb{RP}^2$, but I don't know how the details would work out.
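For what it's worth, here is one way the suggested quotient picture might be made concrete; this is my own sketch of the idea, not Lee's construction, so the details should be checked. The rotation field about the $z$-axis on $\mathbb{R}^3 \setminus \{0\}$,
$$W = x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x},$$
is invariant under every scaling $p \mapsto \lambda p$ with $\lambda \neq 0$, so it descends to a well-defined vector field on $\mathbb{RP}^2$. In the chart $(u_3, v_3) = (x/z, y/z)$ on $U_3$ its pushforward is $-v_3\,\partial/\partial u_3 + u_3\,\partial/\partial v_3$, which agrees with $Y_3$ above, so the descended field should be $Y$ itself: rotation of the projective plane about the point $[0,0,1]$.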
My question is about how to calculate the percentile of a list of numbers. I found on the Internet the formula:
$$p_i=100·\frac{i-0.5}{N}$$
Nevertheless, I don't understand the reason for the $-0.5$. For example, if I have the following ranked list of numbers:
$$1, 2, 4, 5, 100$$
In my opinion, 100 should be the 100th percentile, not:
$$p_5=100·\frac{5-0.5}{5} = 90\%$$
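For what it's worth, applying the same formula to every entry of the list gives
$$p_1 = 10\%,\quad p_2 = 30\%,\quad p_3 = 50\%,\quad p_4 = 70\%,\quad p_5 = 90\%,$$
i.e. each value is placed at the midpoint of its own fifth of the 0-100 scale rather than at its right edge; that is the effect of the $-0.5$.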
I am assuming that all the numbers have the same probability. In this way I'm having the same problem with another formula that is commonly used in this type of calculation:
My question is about the rules on modifying nouns into adjectives, and more specifically about their ostensible violation.
I sometimes have difficulty knowing when to adjective-ify nouns when (1) the adjective immediately precedes the noun in question and (2) an adjective version of the noun exists and is spelled differently.
Take the two examples I came across while surfing the web:
a) "A journalistic career"
b) "A mathematics career"
Both phrases convey the same relative semantic idea in an almost identical construction, bar one detail: in (a) the adjective version of journalist is deployed; however, in (b) the noun form of mathematics is left unaltered even though it acts as an adjective.
Is there a rubric or custom to distinguish when to adjective-ify nouns in these contexts? Or is it completely optional and both camps are valid?
Edit: another illustrative example : Is it correct to say "a gallery of cow pictures" or "a gallery of bovine pictures"?
I have found references from 1963, which seem to be the first published uses that Google knows about, but it seems from these that the phrase was in fairly common use by then: see here and here.
The last sentence of the following paragraph from Dickens is ambiguous to me: "He was only twenty-five years old, he said, and had grown recently, for it had been found necessary to make an addition to the legs of his inexpressibles. At fifteen he was a short boy, and in those days his English father and his Irish mother had rather snubbed him, as being too small of stature to sustain the credit of the family. He added that his health had not been good, though it was better now; but short people are not wanting who whisper that he drinks too hard "
I want to know whether both sentences are the same or one is wrong: "My test is the next day" or "My test is on next day"? I know it is perhaps a very basic question, but I still want to know, so please answer.
Is there a term for when someone asks a question but you know it's only because they either want you to ask them it back, or they want to answer it themselves?
Could you explain to me, please, what the expression "You are a story" means, used in the following dialogue:
A: "You mustn't pay any attention to old Addie," she now said to the little girl. B: "She's ailing today." A: "Will you shut your mouth?" said the woman in bed. "I am not." B: "You're a story."
Clothing more conventionally worn by the opposite sex, esp. women's clothes worn by a man: a fashion show, complete with men in drag | [as adj.] a live drag show
Now that second bit saying that it is an adjective raises my eyebrows a bit, because that example doesn't look like an adjective to me. I parse that as
a      live   drag show
det.   adj.   noun compound
And I feel this interpretation is vindicated if we contrast sentences like
The show will be so extravagant.
with
*The show will be so drag.
Here I replaced an adjective (extravagant) describing a "show" (the same noun in the sentence provided by the dictionary) with "drag" and got a sentence that seems to me to be ungrammatical. If "drag" were being used as an adjective in the example sentence then it should be separable from the noun it modifies, and it seems to not be. And this evidence is completely congruent with a noun compound understanding of the phrase.
Now, to be a little less naïve, I do definitely see how one could think that drag is an adjective in the sentence. My perspective is quite different from that of a monolingual English speaker who did not spend a large amount of time in school diagramming sentences. It is a word right before a noun that modifies it, which is basically how adjectives work most of the time. And I could definitely see it being a pragmatic choice on the part of the dictionary to cater to a simpler, more approachable understanding. However, the issue I see with this is that English allows noun compounding with basically all nouns, and the NOAD does not list every noun this way. So there must be something special the dictionary is trying to tell me about this word, but I don't know what it is.
So my questions here are: What's going wrong? Have I misread the dictionary? What is the dictionary trying to tell me?
I was wondering if the following statements mean the same.
There's no dessert like this.
There's no such dessert as this.
It seems obvious to me that the second one could mean something like: "There's no such thing as this in the world of desserts" or "I've never seen anything like it in the world of desserts."
The first one, however, sounds ambiguous to me because it could not only mean the same as the other, but could also mean something like: "This is the best dessert I've ever enjoyed."
I know "like" implies comparison. Does the first sentence sound ambiguous to you too or does it only have one meaning?
"We'll have this boat fixed." Doesn't it sound like they are gonna employ someone do this job? But here is the thing they are themselves doing this job. So what does it mean?
Looking at USA Google Trends for "A F", "A. F." and "as fuck" shows that "A F" has been used for something (possibly Air France?) since at least 2004, which is pre-Twitter but post-MySpace. It also shows an uptick in "as fuck" from around November 2009, which might coincide with the introduction of "a f" as an intensifier. The "a. f." line is pretty low.
This seems impossible to search for in Google Books, because A F are initials. Using the ngram viewer with "A F_ADVERB" gets no hits.
It doesn't appear in the OED online, and Green's Dictionary of Slang lumps it in with "as fuck". I'm at a loss for where to look.
Is there any evidence out there that "A F" was coined before the advent of social media? Perhaps in military slang?
Is there a standard word to describe something a seller does to secure a sale, particularly an add-on service or package? Like when a car dealer adds a package for new wheels or detailing or a dedicated service support line as encouragement to close soon. A "closer"? "Sweetener"? "White glove"? Value-added service?
Is there a word for when something looks correct when it is wrong? For instance in art: drawing something as it technically is in reality can actually look wrong, while drawing it wrong looks correct.
I used to perform magic and I thought there was a term for this.
'Viroj, his wife, Pranom, Joan and I were duly ushered into an audience room at Chitralada Palace.'
Viroj's wife is Pranom so Pranom is set off with commas as a non-restrictive appositive (Viroj has only one wife). Thus there are four people going to the palace. However, if you do not know that Viroj's wife is Pranom, then you could read the sentence as there being five people going to the palace.
Should I separate the names with semicolons, like so:
'Viroj; his wife, Pranom; Joan; and I were duly ushered into an audience room at Chitralada Palace.'
It looks a little odd to me but I believe it is correct?
I grew up hearing the phrase, "You're a better man than I am, Gunga Din!" used as a compliment, a genuine expression of admiration, fairly self-effacing at the same time.
I have to admit that, while I knew from context that it was meant as praise, I long ago forgot most of the poem it came from, remembering just that Gunga Din was heroic on the battlefield. Hence the admiration.
I was about to use the phrase when I realized that the person I was addressing might be too young to get the reference, so I skipped it, but went back to read the poem. It is (to me) shockingly racist, with lines like
An' for all 'is dirty 'ide 'E was white, clear white, inside When 'e went to tend the wounded under fire!
Researching it a bit, it seems the poem is not taught anymore, much like some of Mark Twain's works in the US.
So, is it still a compliment or have the racist overtones made it obsolete?
Edited to add: The last stanza refers to meeting up with Gunga Din in hell someday. [Again edited to add] I realize that the meeting in hell was a compliment - once again - to Gunga Din. The author calls him, "You Lazarushian-leather Gunga Din!" In the Bible, the Rich man (in hell) asks to let Lazarus (in heaven) give him water: 'Father Abraham, have pity on me and send Lazarus to dip the tip of his finger in water and cool my tongue, because I am in agony in this fire.' While the Biblical answer is 'Nope', the author has so much faith in the goodness of Gunga Din that he believes Gunga Din will bring him - and others - water not only on the battlefield, but also in hell. (I think...) Thanks to @Michael.
I've been wondering if "That is what….," and "This is what ….," in the following passage (taken from Fieldfish.com) can be used interchangeably.
Imagine you are an unmarried couple who have been trying to conceive for years. With the help of a well-established fertility clinic and donor sperm you undergo IVF treatment, and have your much desired child. In the course of the fertility process you are told both parents need to sign consent forms that once signed will confer on both the biological parent and the non-biological parent, the same rights of parentage without needing to go to court after the birth to get a declaration of parental responsibility, nor adoption orders. Then some months later the clinic calls you to tell you that due to an admin error, the forms were not completed correctly and the non-biological parent is not legally the child's parent, and probably the only solution is to go through the adoption process.
That is what happened to many couples in the UK who have had fertility treatment using donor sperm and eggs. This is what happened to a family in 2013 and it prompted the Human Fertilisation and Embryology Authority (the HFEA) to require all clinics to audit their cases to see whether there were any other failures by clinics of having failed to get the family to sign both consent form or, having lost or misfiled these legal consents.
"That" and "This" here seem to be almost the same. It is my understanding that "that" indicates a previously stated idea and "this" suggests the idea and something new about it. Is that correct? How does that affect how they are being used in the above passage?
To stick expresses more than to cleave, and cleave than adhere: things are made to stick either by incision into the substance, or through the intervention of some glutinous matter; they are made to cleave and adhere by the intervention of some foreign body: what sticks, therefore, becomes so fast joined as to render the bodies inseparable; what cleaves and adheres is less tightly bound, and more easily separable.
Two pieces of clay will stick together by the incorporation of the substance in the two parts; paper is made to stick to paper by means of glue: the tongue in a certain state will cleave to the roof: paste, or even occasional moisture, will make soft substances adhere to each other, or to hard bodies.
What does "the tongue in a certain state will cleave to the roof" mean?
S1. X can be done to handle the unsavory practice by Y, which limits growth.
S2. X can be done to handle the unsavory practice, which limits growth, by Y.
In this sentence, the descriptive clause "which limits growth" is supposed to apply to the unsavory practice. Does that mean the S1 usage is incorrect?
Question update:
What's the best way to rewrite or express the idea that the non-restrictive clause applies to unsavory practice?
I find both S1 and S2 confusing and not easy to read. Furthermore, this problem seems to be very common whenever some X has both a descriptive element and a restrictive clause and you want to express both in just one sentence. For example:
John grew up with a brother who worked in construction and was John's only healthy sibling, and another brother who worked in government.
"who worked in construction" is restrictive clause. "John's only health sibling" is non-restrictive.
Another way to rewrite it is:
John grew up with a brother, John's only healthy sibling, who worked in construction, and another brother who worked in government.
Both of these ways to express the idea are clumsy. Any better way?
I'm searching for an idiom (in a negative sense) that means that a group of people have different opinions, so it's difficult for them to solve a problem, to decide on something or agree on something. Example:
They couldn't decide where to go, because everyone had a different opinion.
Since the members of the political party have different opinions about its name, we'll have to wait before designing the campaign.
I came across a new phrase while reading the description section of a webinar on Operational Best Practices in the Cloud, here.
Excerpt:
Don't pave the cow path. Cloud infrastructure is very different from traditional infrastructure and requires different approaches to really harness cloud value. From dev/test/prod lifecycle management to deployment automation, patch management, monitoring and automation for autoscaling and disaster recovery...
What does don't pave the cow path mean, in general and in this context?
Initially I had a problem with: ssh_host_rsa_key invalid format...
I fixed it with
I use CentOS 8.
I started deleting some projects which were actually running.
After the disconnection I can't connect with SSH any more. I already have a host key implemented.
On connecting:
ssh: connect to host MYDOMAIN.com port 22: Connection timed out
In the end I removed the security keys from the DigitalOcean settings... I also changed PasswordAuthentication in sshd_config between no and yes, and PubkeyAuthentication between yes and no.
Still no SSH connection...
My only remaining option is recovery via the DigitalOcean web console...
When I plug my headphones into the (only, so probably combined out/in) jack on my laptop in Ubuntu 20.04, the configuration switches the output to the headphones (I like that) and the mic to the non-existent headphone mic (not so much).
I can switch the mic back to the internal laptop mic in the settings, but it's quite tiring since I plug/unplug them often.
The headphone plug has three sections (not four, as proper headsets have).
How do I prevent switching to the non-existent mic?
I'm seriously considering ways to safeguard my data, since I'm really tired of losing files because of bad redundancy.
I was reading about RAID1 and it seems a very practical method of keeping updated backups, though I'm worried about complexity, security, flexibility and portability.
By complexity I mean that I'm adding another logical layer to the filesystem, so in case that anything goes wrong, maintaining and securing both layers may result in added complexity.
How do you rescue data from a RAID1 system? Is it more complex than a normal ext4?
By security I mean that the redundancy is double-edged, because an error in the upper ext4 filesystem would affect both drives simultaneously, since both drives are read and written at the same time. Is there any means to prevent this?
By flexibility I mean: what happens if one of the mirrored drives breaks and I'm not able to afford another drive? Can I maintain a RAID1 system with only one drive indefinitely? Also, is it possible to simplify a RAID1 system back to a normal ext4 partition?
Is there any way to delay mirroring in a RAID1 system?
By portability I mean: how do I move a RAID filesystem between computers?
Thank you for viewing this question. I have an SD image which contains 4 partitions:
U-boot and sys volume info
Only contains sys volume info
Empty
Root file system
After running file on the U-boot kernel, I can tell the kernel version is 4.14.24-rt19-stable.
My question is: can I download the kernel and build a boot disk to emulate it with QEMU, having only these files available? I know the original target processor is a Cortex-A7.
Is there a password manager with fingerprint support?
I use Fedora KDE.
I know that fingerprints might be somewhat insecure. Still, a password manager with fingerprint support is useful. It can be used for logins which are not too important. Like forums and StackExchange, where losing your password to a state would not be the end of the world in any sense.
My T400 laptop has an internal wifi adapter. It sometimes works and sometimes doesn't. I use an external wifi adapter, and want to disable the internal one so that it won't suddenly start working again and suspiciously interfere with other things.
How can I find out which module(s) to unload?
Here is the current output, with the external wifi adapter working and the internal one not:
I am quite new to regex, and thought that markdown (a custom/made-up format of markdown) to HTML conversion using regex would be a good way to get started. When I'm within Vim (8.1), say converting this line;
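The line to be converted is missing above, but to illustrate the general substitution approach outside of Vim, here is a minimal Python sketch; the two rules (ATX headings and bold) are stand-ins of mine, not the asker's custom format:

import re

def md_to_html(line):
    # '# Heading' -> '<h1>Heading</h1>'
    line = re.sub(r'^#\s+(.*)$', r'<h1>\1</h1>', line)
    # '**bold**' -> '<strong>bold</strong>'
    line = re.sub(r'\*\*(.+?)\*\*', r'<strong>\1</strong>', line)
    return line

print(md_to_html('# A title with **emphasis**'))
# <h1>A title with <strong>emphasis</strong></h1>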
I have two Linux/Ubuntu 18.04 LTS machines. The first one is the host (running sshd) and has a VM installed on it (the virtual machine's ethernet is configured as NAT: QEMU/KVM virtualization). A simple SSH connection between the host and the VM on it in NAT mode works perfectly: ssh user@ip.address > pass
The first machine is connected to the router via LAN and the second machine is an SSH client connected to the router via wifi.
Is there any solution to access the VM (on the host machine) from the machine that is connected to the wifi network only? I'm pretty new to Unix/Linux, so I would really appreciate your support with this case.
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.122.x  netmask 255.255.255.0  broadcast 192.168.122.255
P.S. I would prefer not to switch NAT to bridged mode. If you need more data, I will gladly provide some more details.
I need to display the count of lines which contain numbers with no alphabetic characters, and also display these numbers on separate lines. Below is the content of the example.txt file:
Electronic mail is a method of exchanging digital messages between computer users; such messaging first entered substantial use in the 1960s and by the 1970s had taken the form now recognised as email. These are spams email ids: 08av , 29809, pankajdhaka.dav, 165 . 23673 ; meetshrotriya; 221965; 1592yahoo.in praveen_solanki29@yahoo.com tanmaysharma07@gmail.com kartikkumar781@gmail.com arun.singh2205@gmail.com sukalyan_bhakat@us.in.y.z These are incorrect: 065 kartikkumar781r2# 1975, 123
The expected output of the shell script is given below; can somebody suggest how I can do this?
Output
Number of lines having one or more digits are: 4
Digits found:
29809
165
23673
221965
065
1975
123
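For illustration, here is a rough Python sketch of the counting logic; the assumption (mine) is that a "number" is any whitespace-separated token that consists only of digits once surrounding punctuation is stripped:

count = 0
digits = []
with open("example.txt") as f:
    for line in f:
        # keep tokens that are purely digits after stripping punctuation
        tokens = [t.strip('.,;:') for t in line.split()]
        nums = [t for t in tokens if t.isdigit()]
        if nums:
            count += 1
            digits.extend(nums)

print("Number of lines having one or more digits are:", count)
print("Digits found:")
print("\n".join(digits))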
I have an issue that has really been bugging me for the past few days.
I am trying to boot into Ubuntu 20.04 LTS. However, every time I turn the machine on, it loads something for about 5 seconds and then suddenly shuts down. It then automatically restarts, loads, and shuts down again, continuing in a loop. However, if I go into GRUB, I can log into recovery mode and get access to the root command line interface.
Can anyone help me troubleshoot this strange behaviour?
The following are the only things that are set up in my computer build right now:
Motherboard: Z590 MSI PRO WIFI
CPU: i9-10850k and a CPU fan
RAM: a pair of 16 GB sticks
M2: Samsung 980 Pro NVMe
power supply: EVGA 700 GD, 80+ GOLD 700W
UPDATE:
Currently, what I have tried that lets me log into the OS, and even the user interface, is editing the Ubuntu option in GRUB. However, after only about 5 minutes the computer shuts itself down.
Specifically, I added "$vt_handoff nomodeset". While loading, I found that it keeps saying: Bluetooth hc10: reading intel version information failed (-22)
I'm trying to make a usb drive with two linux distros installed inside. The idea is to carry the usb drive with me and boot the distros in the computer available at the place where I am at that moment. I know that this might be a bad practice, but just wanna give it a try.
I tried to install Kali Linux distro to my usb drive according to this video, in which a VirtualBox VM is used to install the O.S. into a usb drive.
When I first tried, I didn't boot my VM in EFI mode, so the O.S. was installed in legacy mode (boot instructions written in the MBR). All was OK as long as I booted on my PC supporting legacy boot, but when I tried to boot from my Microsoft Surface (which doesn't support legacy boot), I obviously couldn't boot from the external drive.
So I tried to reinstall Kali with EFI mode activated on the VM, but I wasn't lucky this time either and didn't manage to boot the distro on my Surface. The situation was the same as when I tried to boot from my Surface having the distro installed in legacy mode: the Surface didn't recognize the bootable usb drive at all.
Googling I found a bunch of solutions to install/reinstall GRUB to an usb/external drive, but when I tried them, it seemed they were working only as long as I booted on the same device which I used to install GRUB on the usb drive. As an example, when I used the VirtualBox VM to install GRUB into my usb device, I was able to boot my Kali distro into the usb device ONLY from that VirtualBox VM.
I think I'm missing something here... Can someone give me a hand to clarify and, maybe, solve this?
I attach screenshots describing the partitioning of my usb drive and the content of the ESP partition on the USB drive after the installation in EFI mode of Kali Linux, in case they can help:
--- UPDATE ---
I managed to boot my system on BOTH my PCs capable of EFI boot. I just moved the Kali boot loader located in my ESP from /EFI/Kali to the fallback path /EFI/BOOT and renamed the bootloader from "grubx64.efi" to the fallback name "bootx64.efi". I don't know why the boot process didn't manage to boot /EFI/Kali/grubx64.efi; does someone have any clues?
Now I only need to make everything bootable in legacy mode (i.e. using the BIOS) as well. Is that possible? It seems to be possible to boot a USB drive in both UEFI and legacy mode, but is there a way I can set everything up without breaking anything in my current, working EFI bootable configuration?
I have a text file with 3 columns of data. However, at random points in the various files there is a change in the observed unit from ppm to ppb, resulting in the need for a conversion factor and multiplication by 1000.
actual data              needed data look
20101001,01:00,0.3       20101001,01:00,0.3,300.000
20110103,10:00,212.67    20110103,10:00,212.670,212.670
I have an awk command to print all the original columns and add a fourth column with the conversion.
The only issue is that it multiplies everything in the third column by 1000 and prints that to the fourth column. The command is below....
I have 2 files with * as the delimiter, each file with 3k records.
There are common fields in different positions: in file1 (count=1590) the position is 1, and in file2 (count=2707) it is 2. The file2 count and the output count should be the same. Note: the numbers in file2's 2nd position will be present in file1, and we need to take the corresponding $3 value, which is 1 or 0.
In both files the total count was 3k and both files were *-delimited. file1 $1 and file2 $2 are the common field for both files. We need to check whether the common field has the 0 or 1 which is present in file1 $3. We need to write the output like 1==>000000001D0560020011 and 2==>000000003D0792917850, where $1 = seqno, $2 = matched 9-digit value followed by D, and $3 is whether it is 0 or 1.
All $2 values from file2 will be present as $1 values in file1.
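Without the real sample rows this can only be a sketch, but the lookup logic described above might look like this in Python (the exact output record layout is my guess, so treat the write line as a placeholder):

# build a lookup from file1: $1 (the common key) -> $3 (the 0/1 flag)
flags = {}
with open("file1") as f1:
    for line in f1:
        fields = line.rstrip("\n").split("*")
        flags[fields[0]] = fields[2]

# for every file2 record, find the flag via $2 and append it
with open("file2") as f2, open("output.txt", "w") as out:
    for line in f2:
        fields = line.rstrip("\n").split("*")
        flag = flags.get(fields[1], "")  # per the question, every key exists in file1
        out.write("*".join(fields + [flag]) + "\n")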
I am able to mount the NS from my Rock Pi N10 running Debian (buster) using the following command:
sudo mount.cifs //<<ip.address>>/SHARE /mnt/lspro
But on my PC running Ubuntu 18.04, using exactly the same command as above, I got an error:
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
The dmesg logs are:
[48381.426142] CIFS: Attempting to mount //10.1.10.77/share
[48381.426168] No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.
[48381.440240] CIFS VFS: cifs_mount failed w/return code = -2
/mnt/lspro exists on the Ubuntu box. I can cd /mnt/lspro and ls /mnt/lspro, so it's confirmed it's there!
I can even mount the NS through Files > Other Locations with "smb://<ip.address>", using Anonymous without a password, but I cannot mount.cifs on the Ubuntu box.
Has anyone run into the same situation and found a solution?
It is well known that it's a bad idea to do something of the kind <command> $FILENAME, since you can have a file whose name is, for example, -<option>, and then instead of executing <command> with the file -<option> as an argument, <command> will be executed with the option -<option>.
Is there then a general safe way to accomplish this? One hypothesis would be to add -- before the filename, but I'm not sure if that is 100% safe, and there could be a command that doesn't have this option.
I need help creating an iptables rule which will redirect all requests from the IP range 172.16.0.1 to 172.16.0.120, with the port range 20-8081, to a localhost service listening on port 22215. However, this rule should not catch IP 172.16.0.111 with port 443 (i.e., 172.16.0.111:443 should go directly out to the internet).
After applying the above rule, all requests which have an IP and port in the above range are redirected to 127.0.0.1:22215. But I cannot work out how to exclude IP 172.16.0.111 with port 443.
curl //website// will get me the source code, but from there how would I filter out every unique path and obtain the number of them?
the question:
Use cURL from your machine to obtain the source code of the "https://www.inlanefreight.com" website and filter all unique paths of that domain. Submit the number of these paths as the answer.
From the question, I do not know the meaning of "UNIQUE PATHS", but I think it means something similar to what you get from executing $ wget -p.
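One way to check the idea is to do the filtering in a short Python script instead of a curl pipeline; the regex below, and the reading of "path" as anything linked under that domain, are my assumptions:

import re
import urllib.request

html = urllib.request.urlopen("https://www.inlanefreight.com").read().decode()

# grab everything linked under the domain, e.g. https://www.inlanefreight.com/some/path
paths = re.findall(r'https?://www\.inlanefreight\.com(/[^"\']*)', html)

unique_paths = set(paths)
print(len(unique_paths))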
I cannot capture audio using an external microphone on my Linux Mint 20 Ulyana system. In Sound Settings under the Input tab, when I select the external microphone from the Device list, the Input level shows zero/nil, irrespective of the Volume slider position.
However, I can still use the internal microphone without any issue.
Things I have tried so far
I have tried three different external microphones, including a Bluetooth device, so I have eliminated the possibility of an issue with the microphone jack or the microphone pin(s).
I checked whether the device is muted in alsamixer. It looks like the device is unmuted, as I did not see an MM under the device.
How can I get the full filename of a script in bash? (Similar to python's __file__ variable)
something like
#!/bin/bash
# my-script.sh
# would evaluate to /dir/my-script.sh
thisfile=$(get-name-for-this-script)

# Do something with some other local file
cp $(dirname $thisfile)/something.txt .
I have a tiny application written in go and I've cross-compiled it to various operating systems.
Currently my Makefile generates myapp-VERSION-OS-CPUARCH.tar.gz packages, used as source binary packages to be released as .deb, .rpm, PKGBUILD, FreeBSD binary release .tgz and so on, with a structure like so:
bin/myapp
LICENSE
README.md
I can't find tutorials/howtos/examples on how to package this into official OpenBSD .tgz binary release package(s). pkg_create seems to be the command, but I can't find examples.
So how do you make a binary release package on OpenBSD so that it carries all the metadata such as maintainer, application category, architecture and such?
The idea here is not getting the package to any official ports repository. It's to simply package a release for your own machine and learning about the packaging process on OpenBSD.
Is there a practical difference between history-substring-search-up and up-line-or-beginning-search? I've tried both out and they effectively seem to do the same thing (besides some highlighting that history-substring-search does).
I was investigating further an issue already reported here. The problem is: after upgrading the hplip driver to 3.16.2, the scanner in my all-in-one printer HP Color LaserJet Pro MFP M277dw does not work any longer (while the printer does). Today I found other oddities that seem specific to hplip rather than to sane, hence this other post.
I use Ubuntu Linux 14.04 LTS. In all that follows the device is connected and powered-on. The hplip page for that device is here.
warning: CUPSEXT could not be loaded. Please check HPLIP installation.
b. If I run hp-doctor, the welcome message is
error: This distro (i.e ubuntu 14.04) is either deprecated or not yet supported.
This sounds utterly odd to me, because the previous hplip never went so far as to complain about the very same distro. The complete output of hp-doctor is available from here on Paste Ubuntu.
c. Ever more puzzling, if I open the HP device manager, I am presented with the window
which seems a false statement to me, since the device works as a printer at the very least. If I click on Setup device... I get the same dialogue window again. And CUPS on localhost:631 indeed confirms that the printer is there, ready to be found, nice and idle.
Questions
Is there a way to have the commands hp-setup and hp-doctor run smoothly so that I can fix the scanner issue down the line?
If not, how do I downgrade the hplip driver to the previous stable version? Installing 3.16.2 has led to more havoc than joy.
I've recently installed Linux Mint 17, replacing my ageing Mint 13 installation. I've got a 16:9 screen. In case it matters, my graphics card is an NVidia GeForce 210.
Now, back in Mint 13, if a game switched to a 4:3 mode, it was displayed in the correct aspect ratio, with black bars left and right. However, now games are deformed to fill the full screen, which is annoying because it not only looks terrible, but also destroys angles and therefore affects gameplay.
I then also checked explicitly switching to a 4:3 mode (using the "Monitors" settings dialog), and again it deformed the image. I also checked my monitor's setting that it is indeed still set to keep the aspect ratio. Indeed, going into the monitor's menu tells me that the screen still gets a 1920x1080 signal. Therefore I conclude that it's a Linux/X11/graphics driver issue.
I'm using the Nouveau driver. In Mint 13 I used the proprietary NVidia driver; that could make a difference. However I cannot imagine that there's no way to get the correct aspect ratio also with Nouveau.
Therefore my question is: What do I have to do to get 4:3 modes (or, more generally, non-16:9 modes) displayed in the correct aspect ratio on a 16:9 monitor (without affecting the 16:9 modes, obviously)?
Is it possible to set up an SSH port forward where the ssh client prints out the traffic exchanged over the forwarded port to the screen or a file?
I am trying to debug a problem and want to see what is being sent between a Java process running on my local machine and a remote process running on Solaris. I am using port forwarding via ssh so that I can step through the Java program. Normally I would have to copy the .java files to the Solaris machine, build them and run them, which is not a very productive way to debug, hence the port forwarding. The client and server are using the IIOP protocol, so I can't use an HTTP proxy to monitor the traffic.
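If ssh itself cannot dump the forwarded traffic, one workaround is to chain the connection through a tiny logging proxy and point the Java client at it. The following is a rough Python sketch under assumptions of mine (the ports are placeholders and error handling is omitted); it relays between the client and the ssh-forwarded port while hex-dumping every chunk:

import socket
import threading

LISTEN_PORT = 9000                            # point the Java client here
TARGET_HOST, TARGET_PORT = "localhost", 8000  # the ssh -L forwarded port

def pump(src, dst, label):
    # relay one direction and hex-dump everything that passes through
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(label, data.hex())
        dst.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("localhost", LISTEN_PORT))
server.listen(1)

while True:
    client, _ = server.accept()
    upstream = socket.create_connection((TARGET_HOST, TARGET_PORT))
    threading.Thread(target=pump, args=(client, upstream, ">>"), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client, "<<"), daemon=True).start()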
I'm using a tiling window manager and I switched from gnome-terminal with multiple tabs to multiple urxvt instances managed by the window manager. One of the features I miss is the ability to open a new terminal that defaults to the working directory of the last one.
In short: I need a way to open a new urxvt (bash) that defaults to $PWD of the last used one.
The only solution that comes to my mind is to save the current path on every cd with something like this:
echo $PWD > ~/.last_dir
and restore the path on the new terminal in this way:
cd `cat ~/.last_dir`
I can source the second command in .bashrc but I don't know how to execute the first one on every directory change :)
Any simpler solution that does not involve screen or tmux usage is welcome.
I'm using docker-compose (on Windows) to bring up a MongoDB along with a couple of NodeJS processes (on a remote CentOS machine via SSH). The NodeJS containers are supposed to have the code in my project directory mounted into /home/node/app so they can execute it with the command node web/run. However, when I use docker context to deploy this group of containers to my remote host via SSH, I get an error saying the script at /home/node/app/web/run is not found, suggesting my code was not copied or mounted into the container.
You can see that I'm mounting the current directory (my project) below:
I need to add a new column called Range to my dataframe (Titanic dataset), with the age range of every passenger on the Titanic, following this table:
Kids: up to 11 years
Young: up to 18 years
Adult: up to 50 years
Old: 50 years and over
I created a new column and filled it with NaN. Then I tried a loop to iterate over the ages and replace the value of the column, but the column fills all the rows with 'Adult'. Why could this be happening?
for i in df["Age"]: if (i < 11.0): df['Range'].replace(['NaN'],'Niño') elif (i < 18.0): df['Range'].replace(['NaN'],'Joven') elif (i < 50.0): df['Range'].replace(['NaN'],'Adulto') elif (i >= 50.0): df['Range'].replace(['NaN'],'Mayor')
I'm currently processing a form with Formik and Yup that contains more than 700 complex entries. I want to print exactly the item which was affected. E.g: If item #548 was invalid, I'd like to get the item index (which should be 547) and then print it out to the user.
I tried using ${path} interpolation in Yup, which almost does what I want, but I'd like to get only the index or be able to transform the output before it goes out of Yup (Or I'd have to modify the other components, and I don't want it to feel like a hack).
Here's my schema:
const toGradeSchema = yup.lazy((_toGrade: any) => {
  const toGrade = _toGrade as ToGradeInput | undefined;
  const isGradeSet = toGrade?.gradeId || toGrade?.section;
  const gradeId = yup
    .string()
    .test('gradeId', 'Debe de colocar el grado', (val?: any) => {
      return (!isGradeSet && !val) || (val && isGradeSet);
    });
  const section = yup
    .string()
    .oneOf(sectionList)
    .test('gradeId', 'Debe de colocar la sección', (val?: any) => {
      return (!isGradeSet && !val) || (val && isGradeSet);
    });
  const base = yup.object().shape({
    gradeId,
    section,
  });
  return base;
});

const newStudentsSchema = yup.array().of(
  yup
    .object()
    .shape({
      name: yup.object().shape({
        firstName: yup
          .string()
          .required('El primer nombre es requerido ${path}'),
        lastName: yup.string().required('El apellido es requerido'),
        fullName: yup.string().notRequired(),
      }),
      email: yup
        .string()
        .email('El correo electrónico es inválido')
        .required('Debe de colocar un correo electrónico ${path}'),
      password: yup
        .string()
        .required(() => 'Debe de colocar una contraseña ${path}')
        .test(
          'password is valid',
          'La contraseña debe de contener por lo menos una letra en mayúscula, una en minúscula y al menos un dígito',
          (pass: string | undefined | null) => {
            if (!pass) {
              return false;
            }
            return validatePassword(pass);
          },
        ),
      gender: yup.mixed().oneOf(genderValues),
      // The order in the attendance that must not change
      birthDate: yup.date().notRequired(),
      allergies: yup.string().notRequired(),
      diseases: yup.string().notRequired(),
      toGrade: toGradeSchema,
    })
    .notRequired(),
);

const existingStudentSchema = yup.array().of(
  yup.object().shape({
    studentId: yup.string().required('Debe de colocar el ID del estudiante'),
    toGrade: toGradeSchema,
  }),
);

export const bulkStudentCreateSchema = yup.object().shape({
  students: yup.object().shape({
    newStudents: newStudentsSchema,
    existingStudents: existingStudentSchema,
  }),
});
I am working on a project. The current code and document I have only send an email of what the data in the row is.
I would like that the data in columns B - G be merged to a PDF and then it is sent with the email template to an email address in column H.
I am currently stuck on how to add the code for creating the PDF. Here is the link to the folder with the Google Sheets data document: FOLDER LINK
Below is the current code.
function sendEmails() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var dataSheet = ss.getSheets()[0];
  var dataRange = dataSheet.getRange(2, 1, dataSheet.getMaxRows() - 1, 4);
  var templateSheet = ss.getSheets()[1];
  var emailTemplate = templateSheet.getRange("A1").getValue();

  // Create one JavaScript object per row of data.
  objects = getRowsData(dataSheet, dataRange);

  // For every row object, create a personalized email from a template and send
  // it to the appropriate person.
  for (var i = 0; i < objects.length; ++i) {
    // Get a row object
    var rowData = objects[i];

    // Generate a personalized email.
    // Given a template string, replace markers (for instance ${"First Name"}) with
    // the corresponding value in a row object (for instance rowData.firstName).
    var emailText = fillInTemplateFromObject(emailTemplate, rowData);
    var emailSubject = "Tutorial: Simple Mail Merge";

    MailApp.sendEmail(rowData.emailAddress, emailSubject, emailText);
  }
}

// Replaces markers in a template string with values define in a JavaScript data object.
// Arguments:
//   - template: string containing markers, for instance ${"Column name"}
//   - data: JavaScript object with values to that will replace markers. For instance
//     data.columnName will replace marker ${"Column name"}
// Returns a string without markers. If no data is found to replace a marker, it is
// simply removed.
function fillInTemplateFromObject(template, data) {
  var email = template;
  // Search for all the variables to be replaced, for instance ${"Column name"}
  var templateVars = template.match(/\$\{\"[^\"]+\"\}/g);

  // Replace variables from the template with the actual values from the data object.
  // If no value is available, replace with the empty string.
  for (var i = 0; i < templateVars.length; ++i) {
    // normalizeHeader ignores ${"} so we can call it directly here.
    var variableData = data[normalizeHeader(templateVars[i])];
    email = email.replace(templateVars[i], variableData || "");
  }

  return email;
}

//////////////////////////////////////////////////////////////////////////////////////////
//
// The code below is reused from the 'Reading Spreadsheet data using JavaScript Objects'
// tutorial.
//
//////////////////////////////////////////////////////////////////////////////////////////

// getRowsData iterates row by row in the input range and returns an array of objects.
// Each object contains all the data for a given row, indexed by its normalized column name.
// Arguments:
//   - sheet: the sheet object that contains the data to be processed
//   - range: the exact range of cells where the data is stored
//   - columnHeadersRowIndex: specifies the row number where the column names are stored.
//     This argument is optional and it defaults to the row immediately above range;
// Returns an Array of objects.
function getRowsData(sheet, range, columnHeadersRowIndex) {
  columnHeadersRowIndex = columnHeadersRowIndex || range.getRowIndex() - 1;
  var numColumns = range.getEndColumn() - range.getColumn() + 1;
  var headersRange = sheet.getRange(columnHeadersRowIndex, range.getColumn(), 1, numColumns);
  var headers = headersRange.getValues()[0];
  return getObjects(range.getValues(), normalizeHeaders(headers));
}

// For every row of data in data, generates an object that contains the data. Names of
// object fields are defined in keys.
// Arguments:
//   - data: JavaScript 2d array
//   - keys: Array of Strings that define the property names for the objects to create
function getObjects(data, keys) {
  var objects = [];
  for (var i = 0; i < data.length; ++i) {
    var object = {};
    var hasData = false;
    for (var j = 0; j < data[i].length; ++j) {
      var cellData = data[i][j];
      if (isCellEmpty(cellData)) {
        continue;
      }
      object[keys[j]] = cellData;
      hasData = true;
    }
    if (hasData) {
      objects.push(object);
    }
  }
  return objects;
}

// Returns an Array of normalized Strings.
// Arguments:
//   - headers: Array of Strings to normalize
function normalizeHeaders(headers) {
  var keys = [];
  for (var i = 0; i < headers.length; ++i) {
    var key = normalizeHeader(headers[i]);
    if (key.length > 0) {
      keys.push(key);
    }
  }
  return keys;
}

// Normalizes a string, by removing all alphanumeric characters and using mixed case
// to separate words. The output will always start with a lower case letter.
// This function is designed to produce JavaScript object property names.
// Arguments:
//   - header: string to normalize
// Examples:
//   "First Name" -> "firstName"
//   "Market Cap (millions) -> "marketCapMillions
//   "1 number at the beginning is ignored" -> "numberAtTheBeginningIsIgnored"
function normalizeHeader(header) {
  var key = "";
  var upperCase = false;
  for (var i = 0; i < header.length; ++i) {
    var letter = header[i];
    if (letter == " " && key.length > 0) {
      upperCase = true;
      continue;
    }
    if (!isAlnum(letter)) {
      continue;
    }
    if (key.length == 0 && isDigit(letter)) {
      continue; // first character must be a letter
    }
    if (upperCase) {
      upperCase = false;
      key += letter.toUpperCase();
    } else {
      key += letter.toLowerCase();
    }
  }
  return key;
}

// Returns true if the cell where cellData was read from is empty.
// Arguments:
//   - cellData: string
function isCellEmpty(cellData) {
  return typeof(cellData) == "string" && cellData == "";
}

// Returns true if the character char is alphabetical, false otherwise.
function isAlnum(char) {
  return char >= 'A' && char <= 'Z' || char >= 'a' && char <= 'z' || isDigit(char);
}

// Returns true if the character char is a digit, false otherwise.
function isDigit(char) {
  return char >= '0' && char <= '9';
}
I am looking at the following piece of x86 assembly code (Intel syntax):
movzx eax, al
and eax, 3
cmp eax, 3
ja loc_6BE9A0
In my understanding, this should equal something like this in C:
eax &= 0xFF;
eax &= 3;
if (eax > 3)
    loc_6BE9A0();
This does not seem to make much sense since this condition will never be true (because eax will never be greater than 3 if it got and-ed with 3 before). Am I missing something here or is this really just an unnecessary condition?
And also: the movzx eax, al should not be necessary either if the value gets and-ed with 3 right after, should it?
I am asking this because I am not so familiar with assembly language and so I am not entirely sure if I am missing something here.
I am quite new to React Native. I was trying to run the emulator, but it shows "failed to launch emulator" and I am not sure how to resolve the error. What am I supposed to do to resolve it?
D:\hiban_work\react\AwesomeProject>npx react-native run-android
info Running jetifier to migrate libraries to AndroidX. You can disable it using "--no-jetifier" flag.
Jetifier found 903 file(s) to forward-jetify. Using 4 workers...
info Starting JS server...
'C:\Users\Hanim' is not recognized as an internal or external command, operable program or batch file.
info Launching emulator...
'C:\Users\Hanim' is not recognized as an internal or external command, operable program or batch file.
[the previous line is repeated about twenty more times]
error Failed to launch emulator. Reason: Could not start emulator within 30 seconds..
warn Please launch an emulator manually or connect a device. Otherwise app may fail to launch.
info Installing the app...
Starting a Gradle Daemon (subsequent builds will be faster)
> Task :app:checkDebugDuplicateClasses FAILED
> :app:checkDebugAarMetadata > Resolve files of
16 actionable tasks: 14 executed, 2 up-to-date

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':app:checkDebugDuplicateClasses'.
> Could not resolve all files for configuration ':app:debugRuntimeClasspath'.
> Failed to transform swiperefreshlayout-1.0.0.aar (androidx.swiperefreshlayout:swiperefreshlayout:1.0.0) to match attributes {artifactType=enumerated-runtime-classes, org.gradle.category=library, org.gradle.libraryelements=jar, org.gradle.status=release, org.gradle.usage=java-runtime}.
> Execution failed for AarToClassTransform: C:\Users\Hanim Omer\.gradle\caches\modules-2\files-2.1\androidx.swiperefreshlayout\swiperefreshlayout\1.0.0\4fd265b80a2b0fbeb062ab2bc4b1487521507762\swiperefreshlayout-1.0.0.aar.
> error in opening zip file

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 4m 56s

error Failed to install the app. Make sure you have the Android development environment set up: https://reactnative.dev/docs/environment-setup.
Error: Command failed: gradlew.bat app:installDebug -PreactNativeDevServerPort=8081
[the same FAILURE block as above is repeated here]
at makeError (D:\hiban_work\react\AwesomeProject\node_modules\execa\index.js:174:9)
at D:\hiban_work\react\AwesomeProject\node_modules\execa\index.js:278:16
at processTicksAndRejections (node:internal/process/task_queues:94:5)
at async runOnAllDevices (D:\hiban_work\react\AwesomeProject\node_modules\@react-native-community\cli-platform-android\build\commands\runAndroid\runOnAllDevices.js:94:5)
at async Command.handleAction (D:\hiban_work\react\AwesomeProject\node_modules\@react-native-community\cli\build\index.js:186:9)
info Run CLI with --verbose flag for more details.
I read a snippet that confused me, and I could not find the rule or principle that explains it. The output is Malibu; why not London? The address: sherlock.address in let john = { surname: 'Watson', address: sherlock.address }; is supposed to assign the value of sherlock.address to john.address, not to overwrite sherlock.address with john.address. It makes me tear my hair out.
let sherlock = {
  surname: 'Holmes',
  address: { city: 'London' }
};

let john = {
  surname: 'Watson',
  address: sherlock.address
};

john.surname = 'Lennon';
john.address.city = 'Malibu';

console.log(sherlock.address.city); //
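The same aliasing shows up in any language where variables hold references; here is the equivalent experiment in Python, to illustrate that address: sherlock.address copies a reference to the single address object rather than the object itself:

# both dicts end up holding a reference to the SAME inner address dict
sherlock = {"surname": "Holmes", "address": {"city": "London"}}
john = {"surname": "Watson", "address": sherlock["address"]}

john["surname"] = "Lennon"           # rebinds a key inside john only
john["address"]["city"] = "Malibu"   # mutates the shared address object

print(sherlock["address"]["city"])   # Malibu, because there is only one address dict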
Issues in the above code:
1) It runs twice locally: first on pageLoad() and a second time after the scheduler runs.
2) When hosted on IIS, it requires the class name to be appended to the URL, meaning the site has to be reached as "www.site.com/RequestToken.aspx". Is that because I have a single page in my application, namely RequestToken.aspx?
Should I remove the pageLoad() from the above code? Should I add a Default.aspx and move the above code into that page to host it in IIS?
A thread that is in a critical region is a thread that has entered a thread synchronization lock that must be released by the same thread. When a thread is in a critical region, the CLR believes that the thread is accessing data that is shared by multiple threads in the same AppDomain. After all, this is probably why the thread took the lock. If the thread is accessing shared data, just terminating the thread isn't good enough, because other threads may then try to access the shared data that is now corrupt, causing the AppDomain to run unpredictably or with possible security vulnerabilities. So, when a thread in a critical region experiences an unhandled exception, the CLR first attempts to upgrade the exception to a graceful AppDomain unload in an effort to get rid of all of the threads and data objects that are currently in use.
My question is: how can a thread in a critical region experience an unhandled exception? When the thread has entered a lock, the thread is suspended and waits for the other thread to release the lock; if the thread is idle (suspended), it doesn't execute any code, so how is it going to experience an unhandled exception?
E/Η300s: class com.example.vodafone_fu_h300s.screens.ConnectIntoRouterActivity
Only the original thread that created a view hierarchy can touch its views.
So how can I run a thread and provide callbacks for any event, such as when an error occurs or when, for example, credentials are successfully retrieved?
it works perfectly. But if I try to do it as expected in my code, it gives me a permission error:
Traceback (most recent call last):
  File "D:\Programmation\Python\RecupData\RecupData.py", line 23, in <module>
    sftp.put(f"{local_path}/images",f"{remote_path}/imgs")
  File "C:\Users\Louis\AppData\Local\Programs\Python\Python39\lib\site-packages\paramiko\sftp_client.py", line 758, in put
    with open(localpath, "rb") as fl:
PermissionError: [Errno 13] Permission denied: 'd:/images'
Is there a fix to get folders uploading? Thanks a lot.
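As far as I know, SFTPClient.put expects a single file, which is why handing it a directory fails; a directory has to be walked manually. A minimal recursive sketch (untested, assuming POSIX-style remote paths) might look like this:

import os

def put_dir(sftp, local_dir, remote_dir):
    # recursively upload local_dir to remote_dir over an open SFTP session
    try:
        sftp.mkdir(remote_dir)   # ignore "already exists" errors
    except IOError:
        pass
    for name in os.listdir(local_dir):
        local_path = os.path.join(local_dir, name)
        remote_path = f"{remote_dir}/{name}"
        if os.path.isdir(local_path):
            put_dir(sftp, local_path, remote_path)
        else:
            sftp.put(local_path, remote_path)

# e.g. put_dir(sftp, f"{local_path}/images", f"{remote_path}/imgs")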
I tried to set up a WordPress solution (installing it by myself and not using an official image). I have one container with Apache, PHP and mariadb-client (to interrogate the mariadb-server in another container).
I have another container with MariaDB on it.
I use wp-cli to configure the WordPress website, but when I build my Docker image, I can't execute the command (inside an sh file), which is
Hi everyone, I have a problem. After a user signs up for my site for the first time, I want to log in with the user one more time.
I want that after he registers, he connects again.
I tried to do it asynchronously, but it does not always work; sometimes the login is attempted before the user is registered, and I do not know why it does not work.
I want the login to happen only after registration, to force that order.
As you can see, the if condition has no statements. When you click on an image, it gets a red border; then, when you click on another image, what is expected to happen is for that second image to also get a red border, so that both images have a red border, since at no point was the border of the first image removed. Yet it does get removed, because of the if condition. If you go ahead and comment out the if condition, you will see that the border remains on all images as you click. Why?
I am trying to use programming to increase my understanding of Fourier optics. I know that physically and mathematically the Fourier transform of a Fourier transform is inverted: F{F{f(x)}} = f(-x). I am having two problems: 1) the second transform doesn't return anything like the original function except in the simple Gaussian case (which makes it even more confusing), and 2) there seems to be some scaling factor that requires me to "zoom in" and distort the transformed image to the point that it is much less helpful (as illustrated below).
I'm trying to fetch data from a website. I have managed to scrape the data from the first page.
But for the next page the website loads data using AJAX; I set headers for that, but couldn't get the data from the next page.
If we send requests to the website without headers, we get the same data. So maybe I didn't set the headers in the right way to move to the next page. I used cURL for the headers.
Where did I go wrong?
class MenSpider(scrapy.Spider):
    name = "MenCrawler"
    allowed_domains = ['monark.com.pk']

    # define headers, with 'custom_constraint' carrying the page number
    headers = {
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36',
        'accept-language': 'en-PK,en-US;q=0.9,en;q=0.8',
        'key': '274246071',
        'custom_constraint': 'custom-filter page=1',
        'view': 'ajax',
        '_': '1618681277011'
    }

    # send the first request
    def start_requests(self):
        yield scrapy.Request(
            url='https://monark.com.pk/collections/t-shirts',
            method='GET',
            headers=self.headers,
            callback=self.update_headers
        )

    # handle the response
    def update_headers(self, response):
        # extract all 12 URLs from the page
        urls = response.xpath('//h4[@class="h6 m-0 ff-main"]/a/@href').getall()
        for url in urls:
            yield response.follow(url=url, callback=self.parse)

        # extract the infinite-scroll text, 'LOADING'
        load = response.xpath('//div[@class="pagination"]//span/text()').get()

        # use an if condition for pagination
        if load == 'LOADING':
            page = 1
            # read the current page number from the 'custom_constraint' header
            key = self.headers['custom_constraint']
            current_page = key.split('=')[-1]
            next_pag = page + int(current_page)
            filters = 'custom-filter page=' + str(next_pag)
            self.headers['custom_constraint'] = filters
            # request the page again for the next page, BUT THIS IS NOT WORKING FOR ME
            yield scrapy.Request(
                url='https://monark.com.pk/collections/t-shirts',
                method='GET',
                headers=self.headers,
                callback=self.update_headers
            )

    def parse(self, response):
        ........
I have a feedback form on my site which isn't working, in the sense that I can fill it out and it seems to be connecting to the DB, but the info doesn't make it into the DB. I'm not sure where I'm going wrong. Below are the DB connections using PHP.
But I've never seen the syntax before, and I can't seem to find it documented anywhere. Is this just syntactic sugar for using #call, or is there more to it?
I basically have thousands of images of characters with black outlines, all of these images either have a white background or some graphic background, usually just a wood texture behind.
What I want is to create a function (opencv/pil/whatever) that will allow me to just autocrop these images, basically remove everything outside the character's outline.
On the left is the original, uncropped image; on the right is the cropped image. Is this even possible?
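For the plain white-background case, one common approach is to threshold the image and crop to the bounding box of everything darker than the background. A rough OpenCV sketch (this assumes a light background; the wood-texture backgrounds you mention would need a more robust segmentation step):

import cv2

def autocrop(path, out_path, thresh=240):
    """Crop an image to the bounding box of everything darker than the background."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Pixels darker than the (assumed near-white) background become foreground:
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    coords = cv2.findNonZero(mask)           # all foreground pixel coordinates
    x, y, w, h = cv2.boundingRect(coords)    # tightest box around them
    cv2.imwrite(out_path, img[y:y + h, x:x + w])

# Hypothetical file names for illustration:
autocrop("character.png", "character_cropped.png")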
Blockhound detected a blocking call in WebClient for an SSL connection, specifically at java.io.FileInputStream.readBytes(FileInputStream.java) where, I believe, it reads the trust store for CA certificates. Does WebClient have an alternative/fix for this? Or is this not an issue? Thanks
I am using Adopt JDK 1.8.0-275 and Spring Boot 2.3.2.RELEASE
The program works fine as a Python script (.py file) but doesn't work when I convert it to a Windows Executable file (.exe file) using Pyinstaller. Pyinstaller keeps giving me the error: Fatal error detected. Failed to execute script bot. I looked at some other posts but they didn't help me.
I'm working through algoexpert.io coding challenges and I'm having trouble understanding the suggested solution to one of the questions, titled Non-Constructible Change.
Here's the challenge question:
Given an array of positive integers representing the values of coins in your possession, write a function that returns the minimum amount of change (the minimum sum of money) that you cannot create. The given coins can have any positive integer value and aren't necessarily unique (i.e., you can have multiple coins of the same value).
For example, if you're given coins = [1, 2, 5], the minimum amount of change that you can't create is 4. If you're given no coins, the minimum amount of change that you can't create is 1.
// O(nlogn) time, O(n) space.
function nonConstructibleChange(coins) {
  coins = coins.sort((a, b) => a - b); // O(nlogn) time operation
  let change = 0;
  for (const coin of coins) {
    if (coin > change + 1) return change + 1;
    change += coin;
  }
  return change + 1;
}
My problem
I am not completely sure how the author of the solution came up with the intuition that
if the current coin is greater than `change + 1`, the smallest impossible change is equal to `change + 1`.
I can see how it tracks, and indeed the algorithm passes all tests, but I'd like to know more about a process I could use to devise this rule.
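One way to internalize the rule: maintain the invariant that, after processing the sorted coins seen so far, every amount from 1 to change is constructible. A coin c <= change + 1 extends that range to change + c, while a coin c > change + 1 leaves a gap at change + 1 that no later (larger) coin can ever fill. A Python rendering of the same algorithm with that invariant spelled out, for illustration:

def non_constructible_change(coins):
    change = 0  # invariant: every amount 1..change is constructible
    for coin in sorted(coins):
        if coin > change + 1:
            break            # change + 1 can never be built: all remaining coins are >= coin
        change += coin       # amounts 1..change+coin are now constructible
    return change + 1

print(non_constructible_change([1, 2, 5]))  # 4: after coins 1 and 2, change = 3, and 5 > 4
print(non_constructible_change([]))         # 1: with no coins, even 1 is impossible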
Thank you for taking the time to read the question!
//2. Get the distance the mouse slides over the DOM element
//3. Change the height of the DOM element
//4. Change the playback speed of the video

//Get the corresponding DOM elements
var speed = document.querySelector('.speed') //Note: getElementsByClassName would select by class name instead
var bar = document.querySelector('.speed-bar')
var video = document.querySelector('.flex')

speed.addEventListener('mousemove', function (e) {
  //e holds the current event's information; for mouse events that includes the mouse coordinates
  //console.log(e);
  var y = e.pageY - speed.offsetTop //mouse position inside the container; offsetTop is the element's distance from the top of the page
  var percent = y / speed.offsetHeight //offsetHeight is the element's own height
  var min = 0.4 //speed limits
  var max = 4
  var playbackRate = percent * (max - min) + min //playback rate calculation
  var height = Math.round(percent * 100) + '%'
  bar.textContent = playbackRate.toFixed(2) + '' //change the text content; toFixed(x) keeps x decimal places
  video.playbackRate = playbackRate //adjust the video playback speed
  bar.style.height = height //adjust the displayed height of the speed bar
})
//Note: addEventListener's two parameters are the event to listen for (here mousemove) and a callback function
for (var i = 0; i < global.maxItems; i += 1) {
    var ix = x1 + 24 + (i * 40);
    var iy = y2 - 24;
    draw_sprite(spr_border, 0, ix, iy)
    button[i].x = ix;
    button[i].y = iy;
}
draw_text(x1 + 100, y1 + 100, "V to show / hide - Click and Drag Items With Mouse##P to Pick Up##Pick Up Bag For Extra Room To Store Items");
}
This script makes use of the dlib library to calculate the 128-dimensional (128D) descriptor to be used for face recognition. The face recognition model can be downloaded from: https://github.com/davisking/dlib-models/blob/master/dlib_face_recognition_resnet_model_v1.dat.bz2
"""

# Import required packages:
import cv2
import dlib
import numpy as np

# Load shape predictor, face encoder and face detector using dlib library:
pose_predictor_5_point = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")
detector = dlib.get_frontal_face_detector()


def face_encodings(face_image, number_of_times_to_upsample=1, num_jitters=1):
    """Returns the 128D descriptor for each face in the image"""

    # Detect faces:
    face_locations = detector(face_image, number_of_times_to_upsample)
    # Detected landmarks:
    raw_landmarks = [pose_predictor_5_point(face_image, face_location) for face_location in face_locations]
    # Calculate the face encoding for every detected face using the detected landmarks for each one:
    return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]


# Load image:
image = cv2.imread("jared_1.jpg")

# Convert image from BGR (OpenCV format) to RGB (dlib format):
rgb = image[:, :, ::-1]

# Calculate the encodings for every face of the image:
encodings = face_encodings(rgb)

# Show the first encoding:
print(encodings[0])
Imports System.Collections.Generic
Imports System.ComponentModel
Imports System.Data
Imports System.Drawing
Imports System.Linq
Imports System.Text
Imports System.Threading.Tasks
Imports System.Windows.Forms
' make sure that using System.Diagnostics; is included
Imports System.Diagnostics
' make sure that using System.Security.Principal; is included
Imports System.Security.Principal
' make sure that using System.Net; is included
Imports System.Net

Public Class Form1

    Public Sub New()
        MyBase.New
        InitializeComponent
    End Sub

    Private webClient As New WebClient()

    Private Sub webClient_DownloadProgressChanged(sender As Object, e As DownloadProgressChangedEventArgs)
        Dim bytesIn As Double = Double.Parse(e.BytesReceived.ToString())
        Dim totalBytes As Double = Double.Parse(e.TotalBytesToReceive.ToString())
        Dim percentage As Double = bytesIn / totalBytes * 100
        progressBar1.Value = Integer.Parse(Math.Truncate(percentage).ToString())
    End Sub

    Private Sub webclient_DownloadFileCompleted(sender As Object, e As AsyncCompletedEventArgs)
        MessageBox.Show("Saved as C:\MYRAR.EXE", "Httpdownload")
    End Sub

#Region "basic function for app"
    Private Sub lblLink_Click_1(sender As Object, e As EventArgs) Handles lblLink.Click
        Process.Start("www.vclexamples.com")
    End Sub

    Protected Overrides Function ProcessCmdKey(ByRef msg As Message, ByVal keyData As Keys) As Boolean
        If (keyData = Keys.Escape) Then
            Me.Close()
            Return True
        End If
        Return MyBase.ProcessCmdKey(msg, keyData)
    End Function
#End Region

    Private Sub button1_Click(sender As Object, e As EventArgs) Handles button1.Click
        AddHandler webClient.DownloadProgressChanged, New DownloadProgressChangedEventHandler(AddressOf webClient_DownloadProgressChanged)
        AddHandler webClient.DownloadFileCompleted, New AsyncCompletedEventHandler(AddressOf webclient_DownloadFileCompleted)
        ' start download (VB strings have no backslash escaping, so the path uses single backslashes)
        webClient.DownloadFileAsync(New Uri(textBox1.Text), "C:\MYRAR.EXE")
    End Sub

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
    End Sub
End Class
Borderlands 2 VR has a very irritating setup where if you set your right stick to free turn (as you almost certainly want to for seated play) it does not remove the bindings to allow weapon selection using the right stick. This is super inconvenient as generally you don't want to change weapons when you turn. There is a sort of workaround of only equipping two weapon slots, but generally speaking, it's useful to be able to equip all four weapons.
Is there any way to disable weapon swapping using the right stick, and just have the B button cycle weapons? There doesn't seem to be a UI option to do this, but maybe it's achievable by editing config files?
Our old Minecraft disk is scratched and hardly readable (needs toothpaste treatment on every insert) so we bought a new disk, thinking we could switch to that without problem. But the console (PS5, but the games are PS4 version) treats them as two separate applications.
The problem is that the old worlds are only visible when playing from the old disk, and it treats the new disk as a completely different save folder - nothing visible there.
How can I move the old worlds to the new copy of the game?
I tried backing up to a USB from old and new, and I can see that the saved worlds from each version are in two separate folders, CUSA00744 and CUSA00265. I tried to move the old worlds to the new folder, but if I do that it complains that the save data is corrupt.
Just to be clear: both the old worlds and the new worlds are in Bedrock format, so that is not the issue.
I just completed all the side-ops in MGSV. As discussed in this post, some side-ops light up again after you complete them, so you can always do a side-op again even after completing all of them. However, there is a challenge task, "complete all side-ops", that I would like to check off. But even though I have completed all the side-ops, there is a caption in the lower right of the side-ops list that says "Completion 135/157".
I downloaded a Minecraft map, but when I open the doors there is a text that comes on the screen that says "open and close". How do they do this? Do they use commands?
This is a wireless gamepad receiver board. The problem is the LINK LED doesn't blink, and so the gamepad can't connect. Can somebody give me a clue where to check or where the problem could be? No physical accident happened to it, btw.
So the thing is, I want to ban the players who try to break into a house on my server. How can I do that? I also want it to be automatic. I tried testfor but it did not work.
I used a command block that was set to repeat with the command /summon slime and accidentally turned on a lever next to it. Now my screen is full of slimes, the world is lagging, and I can't turn off the lever as the game won't completely load. Is there a way to stop the mobs?
Palicoes and palamutes (aka buddies) are used for a variety of things, including trading via the Argosy. There are several skills for buddies to use when trading, but they all have a level requirement. Due to this, I'd like to level up my buddies as quickly as possible. However, I only like bringing my "original" buddies along with me on quests (I named them both after IRL pets, so I just can't see the look of rejection in their eyes when I leave them behind while I go off killing monsters without them).
Due to all of the above, I'm trying to figure out the fastest way to level up my buddies without bringing them on quests. I suspect it's sending them along on Meowcenaries quests, but I haven't been paying close enough attention to my buddy levels to be sure. Even if Meowcenaries is the fastest, I'm forced to wonder if factors such as the type of quests they're sent on will make a difference in the amount of experience they gain.
What's the fastest non-quest way to level up my buddies?
Basically, I'm trying to make a "One command creation", which is, if you don't know, when you have an entire creation/project in one long command.
My creation makes a box, and then puts command blocks inside that box that will do stuff, all generating from a single command. One of the command blocks in that box has some quotes in it that it needs to be able to run, but the long command that makes the command block can't have those quotes for some reason. How can I get around this?
Here's the entire command (I believe some parts of it might be useful to this but I'm not sure):
Little explanation - It summons a redstone block with an activator rail on top, thus powering the activator rail. Then it spawns a command block in a minecart (CBM) which goes on a powered activator rail, thus activating the command in the CBM. That command sets a block inside my box to be a repeating command block with a command. The problem is in this command - specifically, at id:"minecraft:iron_block". The problem is that the repeating command block needs those quote marks, but the long command can't have those quotes or it doesn't work.
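For what it's worth, the usual workaround in one-command creations is not to drop the inner quotes but to escape them with backslashes, with each extra level of nesting doubling the backslashes. A hedged illustration of the idea (not your exact command):

{id:"minecraft:iron_block"}      as typed directly into a command block, becomes
{id:\"minecraft:iron_block\"}    when nested one level deeper inside an outer quoted Command string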
Apologies if anything was confusing. If anyone can solve this issue I'd greatly appreciate it, as this is only one of many times this has happened.
I have a picture of what it looks like, but it says it's too big. It says java sprit error, and my game looks like I turned on a super secret setting from 1.8, the staticky one.
I plan on building a large base in Minecraft that will involve some white and gold in some parts, and while a gold farm in the nether will do for the gold blocks, I still want to have some bone blocks in place of white concrete for some texture. I know that I can make automated villager carrot or potato farms, then have the food go into an autocomposter which would make bone meal that I could craft into bone blocks, or have a mob farm to produce bones to craft into bone blocks. I need to know which one would be more efficient, but I would like it to be fairly cheap and I am still early in the game, without many resources. Does anyone have any suggestions or designs for a bone/bone meal farm that would work best for this?
I'm trying to make Minecraft Dungeons in vanilla Minecraft, but I just realised something: "Since grass spreads, how am I going to keep the dirt as dirt?" So is there some sort of gamerule or something to keep the dirt I place AS dirt? This is in Java 1.16.5, and no, I can't use coarse dirt.
This is my 10th failed mob grinder; I haven't got one to work, and I've followed multiple tutorials all leading to nothing. Caves in the area have been lit up, and this was built near land but above the ocean. The close land also has torches, and mobs don't spawn there. My friend built one on land with no cave lighting or anything. This was also built in survival, if that changes anything. The spawn platforms are 9x8x4. I read that I should build it higher up, but my previous attempts were higher up and they didn't work either.
I'm trying to have a friend join my Minecraft server. For some reason, when he joins the server, his avatar just stays in one place and then after about 15 seconds he is kicked off the server. The weird thing is, he is able to move around on his end for a little bit but all I see on my end is him trapped in one spot.
We both have official Minecraft licenses (As I paid for them both) and are playing on the latest version. The server is a Linux server running on AWS and is running the java dedicated server version 1.16.5. I've been using this server successfully with other players for over a year and I haven't seen this problem. No firewall rules are enabled on the server. I'm wondering if there is some setting that I'm missing. Any ideas?
Can you play a GTA 5 disc on PS4 online? I know you need to have PS+ if you're buying the game from PSN to play it, but I want to buy the disc so I don't have to get PS+. Is it possible?
How do I summon something with velocity in Minecraft? I tried searching it on Google and clicked the links I could find. When I tried them, it didn't work. How do I do it?
So I'm trying to create a scoreboard that tracks the kills of players, but I only want it to increase when they kill a specific player, in this case just the player IGN would do.
I am trying to make a map, and I have a command block to detect when the player is on a gold block and a chain command block to say the message in chat, but it spams it until I get off the gold block. Is it possible to make it only say it once? (The repeating command block command is: execute as @a at @s if block ~ ~-0.35 ~ gold_block run spawnpoint @s ~ ~ ~ -90)
Disclaimer: I'm not sure if this is on topic but I'll try anyway. If not, tell me and I'll delete this question.
I have a Nintendo handheld console (New 2DS XL in my case) and the upper screen is broken. Obviously warranty doesn't cover this but I'd still like to get it repaired. Unfortunately I cannot find anyone in my country (Latvia) who would be willing to do it. I found YouTube instructions and 3rd party replacement parts on ebay, but I'd like to keep that as the last option. Seemed risky.
I tried contacting Nintendo in their support chat, but they're split in webpages for specific countries and I couldn't find a "global" webpage. So I went for the UK page (still close and speaks English), but they said they only repair devices from the UK and they cannot speak for other countries. So basically, no help there either.
Has anyone had a similar experience and have you found any solutions beyond self-repair or buying a new console?
Previously, in League of Legends, Co-op vs. AI games would give different XP rewards based on the player's summoner level and the difficulty level of the bots. The old rates can still be seen on the League of Legends Wiki here.
However, the table shown on that page is out of date, as it does not include the "intro" difficulty level or summoner levels beyond 30. The Riot Support page provides updated numbers for how summoner level affects XP rewards (assuming that "Level 30" means "Level 30+"), but makes no mention of how difficulty might affect those rewards.
Does difficulty still affect XP rewards from Co-op Vs. AI games in any way, or are rewards now consistent across all difficulty levels?
I can't spank the girl (Millie) hard enough to pass the mission. Some say turn on the Frame Limiter but I have tried more than 2 times and I still can't pass it.
I connected discord to my xbox live account and I played a game. I got out of it, later disconnected my xbox live account from discord, and it still says I'm playing the game I was playing even though I wasn't even on the xbox. It said I was playing a certain game for 22 hours, and I can't turn it off. I have the discord mobile app, and my friend took it off. Later I went back on that game and the same thing happened. How do I fix this?
I'm making a sniper with 1.12 commands. It's working very well (except for the fact that bows don't shoot perfectly) but I have a problem: arrow speed.
If you shoot an arrow at minimum speed, you can literally overtake the arrow and jump into it. That's not a very nice sniper -_- So I searched on the internet and only found one solution, which blew up my world and is not survival friendly. So I'm asking you if you know anything to speed up an arrow (the arrows already have the {NoGravity:1b} tag). You don't know in which direction the arrow is flying, so I can't use the Direction:[0.0,0.0,0.0] tag. I also tried effect @e[type=arrow] 1 1 10 in a repeating command block.
I'm not a noob so you can trust me I didn't make [always active] mistakes or something.
The passive item Unity supposedly adds 2% of the damage of every other gun you're carrying to your currently equipped gun. How does it handle things like shotguns (which fire multiple projectiles that make up most of their damage), charged weapons (which have different damage ratings), or explosive weapons (which deal at least part of their damage as an explosion)? How does it calculate the damage from beam weapons? The exact damage calculation is sure to be less than straightforward.
Wine 5.0.4 has been released. Wine (Wine Is Not an Emulator) is a compatibility layer capable of running Windows applications on several POSIX-compliant operating systems (such as Linux, macOS and BSD). Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls into POSIX calls on the fly, eliminating the performance and memory penalties of other approaches and allowing you to...
function Ye() {
    return 256 * Math.random() | 0
}

var rn = [3, 7];

function on(e, t) {
    void 0 === t && (t = Ye);
    var n = t() % 4,
        r = function (e) {
            if ("function" == typeof TextEncoder) {
                return (new TextEncoder).encode(e);
            }
            for (var t = unescape(encodeURI(e)), n = new Uint8Array(t.length), r = 0; r < t.length; ++r)
                n[r] = t.charCodeAt(r);
            return n
        }(JSON.stringify(e));
    console.log("17", r);
    var i = 1 + rn.length + 1 + n + 7 + r.length;
    console.log("19", t.length, rn.length, n, r.length, i);
    var o = new ArrayBuffer(i);
    console.log("21", o)
    var a = new Uint8Array(o),
        u = 0,
        s = t();
    a[u++] = s;
    for (var c = 0, l = rn; c < l.length; c++) {
        var d = l[c];
        a[u++] = s + d
    }
    a[u++] = s + n;
    for (var f = 0; f < n; ++f) a[u++] = t();
    var v = new Uint8Array(7);
    for (f = 0; f < 7; ++f) v[f] = t(), a[u++] = v[f];
    for (f = 0; f < r.length; ++f) a[u++] = r[f] ^ v[f % 7];
    console.log("39", o);
    return o
}

on('{"name":"abcde","password":"15e2b0d3c33891ebb0f1ef609ec419420c20e320ce94c65fbc8c3312448eb225"}')
GNU has announced the official launch of GNU Assembly and its website, saying the site will serve as a collaboration platform for GNU package developers. According to the announcement, the project originated 10 years ago in an email from GNU Guile's Andy Wingo to the GNU maintainers, hoping for a collective decision-making forum for the GNU project. That vision has now become reality. Assembly is the new platform for GNU toolchain projects and currently hosts around 30 GN...
Is this normal behavior for the macOS Big Sur Zsh (Z shell) terminal? Sometimes shell commands such as find or launchd (just as simple examples) execute properly. This is with the exclamation symbol "!" in front, leading with the expected output and no privilege escalation.
For example, running: !find / will still work. Or !sudo will execute.
But !echo '@' will not, and sometimes leads to the next command.
Output for !echo
zsh: event not found: echo
Output for !launchd
launchd
launchd cannot be run directly.
It seems weird that !sudo outputs sudo syscallbypid.d
Hi, I am creating Keynote masters, and every time I insert a picture it puts an icon on it, helpfully meant to let you insert a photo from your files. It doesn't show up in the final Keynote, but it is a pain in the neck. How do I delete it?
A friend of mine has a Mid 2012 MacBook Air running Catalina, and he's trying to get the Android emulator BlueStacks running on it. He gets an error saying "System extension blocked. Enable the extension from Security & Privacy System Preferences pane by clicking 'Allow' button and BlueStacks will launch again." I looked in that pane, and couldn't find anything like that. I'm obviously missing something here but I don't know what!
tell application "Notes"
    set theMessages to every note
    repeat with thisMessage in theMessages
        set myTitle to the name of thisMessage
        set myText to the body of thisMessage
        set myCreateDate to the creation date of thisMessage
        set myModDate to the modification date of thisMessage
        tell application "Evernote"
            set myNote to create note1 with text myTitle title myTitle notebook "Imported Notes" tags ["imported_from_notes"]
            set the HTML content of myNote to myText
            set the creation date of myNote to myCreateDate
            set the modification date of myNote to myModDate
        end tell
    end repeat
end tell
And it gives me the error: Syntax Error Expected end of line but found identifier.
I'm experiencing lower than expected battery life on my 2015 Macbook Pro. I believe that part of the problem is that the High Performance GPU is being activated when not needed by some tab in Safari:
This is NOT a CPU related problem, as the CPU tab in Activity Monitor shows little activity:
How can I tell which tab is requiring the high-performance GPU?
I need to access data about the GPU and screen resolution in C, not using system_profiler, because it takes too long (0.3s-0.5s for system_profiler SPDisplaysDataType) and needs grepping and cutting, which is not that fast either.
Caching would be an answer, but a very short-sighted one, as someone can use different monitors etc.
When my friend syncs their iPhone over USB with their Mac, contacts and calendar entries they create on their Mac are not copied to their iPhone, but contacts and calendar entries they create on their iPhone do get copied to their Mac. How can we get this data syncing bidirectionally again?
This data used to sync in both directions correctly, but stopped doing so around the time they updated to iOS 14.4.2 (it may very well have started before the update; they first noticed about a day after updating). It's hard to tell when exactly it started exhibiting this behavior because there's no error message when they sync. The behavior we do observe is that once the sync gets to the calendar step, it sits there indefinitely until the sync is cancelled. If we turn off calendar sync, it gets to the contacts step and similarly sits there indefinitely until the sync is cancelled.
They do not use iCloud. They're running macOS Big Sur 11.2.3 and sync using the Finder. They use Apple's Contacts and Calendar apps on both their phone and computer. "Sync contacts onto iPhone", Sync "All Groups" is selected, and "Sync Calendars onto iPhone", Sync "All Calendars" is selected.
We've tried booting the Mac into safe mode, but this appears to completely disable whatever Finder component handles iOS syncing. We also tried performing a sync while streaming the AMPDevicesAgent logs, but when the sync hangs, the logs just show a bunch of "sending Ping for device" and "got Ping message for device" messages with no errors until we cancel or disconnect the phone.
Something strange we noticed that may or may not be relevant is that in the Finder sync settings, the box for "Add new contacts from this phone to 'ALC Board'" is checked (this is one of their groups in Contacts) even though they never chose it. However, when they try to pick a different group for the destination, it only shows a list containing their other groups but not the All Contacts folder. If they uncheck this box, and click Apply, after syncing the box comes back checked (sometimes a different group name will appear). I am unsure whether this is relevant because they can sync from their phone to their Mac.
Similarly strange, in the settings for Calendar sync, the 'Do not sync events older than — days' option is currently unchecked. However, originally it was checked and set to 730 days. When they set it to 0, it was supposed to transfer all calendar entries from Mac to phone, but it did not (and did not show an error message).
Background: I'm using a MacBook Pro with the M1 chip and Big Sur 11.1 (got it last month). One thing that really annoys me is that when I connect my iPhone, sync starts automatically, and it takes up 20GB when all iPhone files are backed up.
Issue: So I wanted to delete the sync files using "Reduce Clutter" in "About This Mac -> Storage". It worked fine for two weeks after I purchased the MacBook, but recently it starts to crash every time when I open "About This Mac -> Storage -> Manage" and I wasn't even able to get the time to click "Review Files" under "Manage". I tried to restart the system but had no luck at all. See screenshots below for crash report.
Help: Does anyone else encounter the same issue? How to resolve it? Is there a way (say, command line) to delete synced iphone files without clicking "About This Mac -> Storage"?
"Manage-> Review Files" is a really convenient way to view and remove redundant files to save storage. It would be great if there is a way to revive that functionality if possible.
There are suggestions for how to replace a character or a string with a newline in Excel for Mac, like this one. But trying to reverse the process does not work; e.g., typing CTRL+J or ALT+0010 etc. Any advice on how to replace newlines in cells with, for example, a space?
Today I bought a 2017 Macbook that has a water damaged LCD and a faulty closed-lid sensor. My intention is to factory reset it and use it as a build system (i.e. develop my app on my desktop, then remotely connect into the machine to compile / distribute).
When in MacOS, I can ⌘ Command+F1 to mirror to an external display, but obviously that doesn't work in recovery mode. One suggestion was to boot to recovery, then simply close the lid to make the external display the primary display, but with a faulty "lid closed" sensor, I can't do that.
Another suggestion was to close the lid, then run a fridge magnet over the corner of the unit to activate the closed-lid sensor. I tried this, but it just puts the unit to sleep, and I need to open the lid to start it back up.
Someone else suggested attempting to drag the window over from one screen to another, but I tried a bunch of times and had no luck because it's a bit of a stab in the dark to hit something that thin.
I have a dock at work that I intend to try (so I can close the lid, do the magnet, then use an external keyboard to try and wake the device), but until I can get to it, I'm wondering if there's a way to mirror the display, or at least move the window so I can proceed with reinstallation of MacOS?
I have 8 different email accounts configured on my iPhone. It's a pain to move them one-by-one to the iPad. If I wipe the iPad I can move them all over at the same time. Is there any way to do this without wiping the iPad? I just got a new email account and don't want to manually configure each device.
I have Unity 2020.3.0f1 Mac apps that were created on end-of-life High Sierra computers. I recently purchased a Mac Mini M1 computer. The apps run on both computers. I have to execute a terminal command on them to get them to open on the M1 computer.
Apple changed the App Store Upload process so that now we have to notarize that new apps will run on Mojave. I don't have a computer that runs Mojave. From my understanding you can't test Unity builds using the Apple Simulators. Is there another option to test my apps to see if they will run on Mojave?
I have a 13-inch MacBook Air Early 2015, resolution 1440 × 900, with Thunderbolt 2 and USB-A. I want to connect it to a bigger display. Right now I am considering buying an LG 27UK850-W 4K 27-inch monitor (resolution 3840x2160), which has HDMI, USB-C, DisplayPort and a USB downstream port. Can I connect my MacBook and the monitor with the following connections:
MacBook Air -> Thunderbolt 2 Cable -> Thunderbolt 3 (USB-C) to Thunderbolt 2 adapter -> LG 27UK850-W 4K Monitor
MacBook Air -> USB-A to HDMI adapter -> HDMI -> LG 27UK850-W 4K Monitor
Would there be any limitations with the display quality? Which one is the best way to connect the devices or do you have any other suggestions?
Recently, Google Chrome keeps losing the ability to detect my location. When I open Google Maps, it has the icon indicating This site has been blocked from accessing your location and when I click it, a dialog containing Location is turned off in Mac system preferences is shown:
So I have to go to Enable Location Services:
But I am sure that I have done this several times recently. It seems that this setting is lost regularly (maybe for every Chrome auto-update?). How can I make this setting permanent?
Whenever I open Google Chrome (including Chromium and Canary) on my Mac the location is set to off. So I must go to Preferences -> Security & Privacy -> Privacy -> Location Services and set Canary's location setting on.
However, I'm not sure why Chrome's location setting is set to off whenever I quit the app. How can I make the location setting permanent?
I locked the key icon on the privacy pane once I had checked Chrome's location setting, but it didn't work... I use macOS 11.0.1 (updated to 11.2 now) and found this is true on multiple Macs.
UPDATE
I found out that this problem happens on all Chrome variants. Also, this even happened while I'm running the app.
I can add the date, links, and page numbers, but I would also like to display the chapter titles in the header, updated automatically when a chapter's title changes. Any ideas? I'm using Pages 10.2 (7028.0.88).
Somehow something seems to have been changed with my Zsh configuration or Terminal settings under macOS Catalina (10.15.7). If I copy some text, say from a TextEdit window, the usual paste command and shortcut-key (⌘ CommandV) no longer paste that text onto the command line in Terminal.
Consider this PDF file for example. The text in this file appears scrambled when opened with Safari or Preview. However, the PDF is formatted fine when opened with Adobe Acrobat Reader DC or most 3rd party web browsers including but not limited to Gecko-based Firefox and Chromium-based Google Chrome, Microsoft Edge and Opera.
I wondered if it is because of some of the embedded fonts in the PDF file which perhaps I needed to install on my mac. So, I searched for these fonts and installed them on my system but to no avail. Perhaps it is a bug?
How would you deal with such PDF files when you want to primarily use Preview?
EDIT: I am using macOS Catalina 10.15.6 and Preview 11.0. In the attached screenshot, the left rendering is by Preview and the right one is by Adobe Acrobat Reader DC.
I tried to google and found this question https://stackoverflow.com/questions/34451126/xcode-crashes-on-launch and tried to delete DerivedData folder but this hasn't resolved the problem and it keeps crashing. I don't have admin credentials on my machine, so I'm severely limited in what I can change.
Do I have to reinstall XCode or is there any other solution or workaround?
I just got a new MacBook and a QGeeM adapter between USB and USB-C. Trying to connect an older Apple DVD drive that has the usual USB connector. The new MacBook doesn't recognize that a device has been connected.
Tried restarting MacBook, connecting QGeeM to different USB-C and the DVD drive to different USB ports of QGeeM. At some point a popup appeared that the external device needs power and should be connected to a USB port, which makes no sense because it is. Tried to chat with Apple, but they were extremely unhelpful and dropped the chat.
As earlier described in this question, I was asked today by my iPad to enter the user account password of my MacBook, and my MacBook later asked me to provide the passcode of my iPad. I'm specifically not talking about my iCloud password, the actual device passwords were requested and this behaviour is kind of known.
Now my question is: What happens to these passwords? Are they sent over to Apple? Are they used locally to decrypt something that has been encrypted with that passcode?
So is Apple in possession of data that can be used to run brute-force attacks against my device passwords? This would be totally against the idea of the T2 chip limiting brute-force attacks, and I never agreed that such information is sent over to Apple, nor was I informed about it at any point. I'm not sharing my KeyChain with iCloud and do not wish to.
I couldn't find any exact information on that matter, anybody shedding some light on it is highly welcomed.
The question is in the title, but anyway, let me explain it a bit more:
The most accepted way for correctly defining the install name for a dylib in MacOS is by making it relative to the rpath. For example:
otool -L ./LLVM/7.0.0/lib/libomp.dylib
./LLVM/7.0.0/lib/libomp.dylib:
        @rpath/libomp.dylib (compatibility version 5.0.0, current version 5.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.50.2)
Here, libomp.dylib has @rpath/libomp.dylib as its install name. So far so good.
The problem is when you create an executable linked to such a dylib. Even if you pass the correct -L/path/to/libomp.dylib flag at link time, so that ld can successfully link the executable, when you then try to execute it, you obviously get the following:
dyld: Library not loaded: @rpath/libomp.dylib
  Referenced from: mydumbexecutable
  Reason: image not found
Abort trap: 6
This of course can be fixed by using install_name_tool on either the dylib (changing its install name so that it doesn't depend on the rpath, and linking the executable again, but this is not considered good practice), or, the recommended way, on the executable, adding to it the proper rpath so that the dylib can be found.
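Concretely, the two fixes just described look something like this (paths are illustrative, not taken from a real project):

install_name_tool -id /abs/path/to/lib/libomp.dylib libomp.dylib
install_name_tool -add_rpath /abs/path/to/lib mydumbexecutable

The first rewrites the dylib's install name so it no longer depends on the rpath (works, but not good practice); the second gives the executable an rpath entry so @rpath/libomp.dylib can be resolved, which is the recommended route.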
But... just wondering... isn't there a flag in ld that automatically adds the rpath for you? I mean, if ld is able to link the executable because it did find the dylibs, why can't it automatically store the proper rpath in the executable?
I understand this should be optional behaviour, as sometimes you prefer to define the rpaths yourself, but... a flag for doing it automatically would make my life a lot easier.
Too often, when I try to move my cursor and click, or try to drag something or select text, and my two fingers end up on the trackpad at the same time, all my open tabs shrink into this view:
This is extremely disruptive as it makes me lose my focus and then I have to locate the page I was currently on, click it and then resume working.
I can't tell what this specific feature in Safari is called (I want to still be able to pinch to zoom in and out of a page), and couldn't find info anywhere on how to completely disable it.
Can I disable this? (or I can use another browser, obviously...)
Is it possible to convert a Package into an App? I have a complete installer of El Capitan as a pkg file and I need to install it. But first I need to make it an app. I can't do it through the MAS because el Capitan is no longer available.
What programs have trouble with Case Sensitive (HFSX) systems?
What are the work-arounds?
In general, the problem is that the developers have a file in their app called FOO, but try to access the file by the name foo. In an HFS+ system that is case preserving but case insensitive, searching for foo will find FOO. That is not the case in HFSX. The general solution is therefore to
Find the misnamed file or folder
Make a copy, a link, or rename so the expected name is found (a sketch follows below)
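For example, if an app ships a file named FOO but looks it up as foo, a symlink restores the expected lookup on a case-sensitive volume (paths are hypothetical):

cd /Applications/SomeApp.app/Contents/Resources    # wherever the misnamed file lives
ln -s FOO foo                                      # the lowercase name the app expects now resolves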
I have a Root OU that has an OU called "Clients" and under I have multiple OU's and the client's PC's/User Accounts in sub-OU's.
The issue is, my clients can see other groups' user accounts/computers, and I need to prevent this, as if they're on completely different machines and not under the same domain. I am guessing I have to make Deny rules for every single OU group about every client OU group?
Currently, they can search AD for users and see other clients (not within a said company).
Any thoughts on how to do it and potentially with Powershell or just in general?
I'm running the ESXi 6.5 embedded host client. When I SSH into the system I can run esxcli vm process list and get the expected output:
testserver1
   World ID: 67909
   Process ID: 0
   VMX Cartel ID: 67908
   UUID: someuuid
   Display Name: testserver1
   Config File: /vmfs/volumes/somelocation/testserver1/testserver1.vmx
But if I run esxcli vm process kill –t=soft –w=67909 I get the error Error: Unknown command or namespace vm process kill –t=soft –w=67909
To confirm I'm running the correct command, I ran esxcli vm process kill -help and got
Error: Invalid option -h

Usage: esxcli vm process kill [cmd options]

Description:
  kill                  Used to forcibly kill Virtual Machines that are stuck and not responding to normal stop operations.

Cmd options:
  -t|--type=<str>       The type of kill operation to attempt. There are three types of VM kills that can be attempted: [soft, hard, force]. Users should always attempt 'soft' kills first, which will give the VMX process a chance to shutdown cleanly (like kill or kill -SIGTERM). If that does not work move to 'hard' kills which will shutdown the process immediately (like kill -9 or kill -SIGKILL). 'force' should be used as a last resort attempt to kill the VM. If all three fail then a reboot is required. (required)
  -w|--world-id=<long>  The World ID of the Virtual Machine to kill. This can be obtained from the 'vm process list' command (required)
Can you see anything I'm doing wrong that might be preventing this command from working? I realize there's the vim-cmd alternative in the docs, but I'm trying to figure out why the first option from the docs is responding like it's not even a valid command.
Before the update to the Microsoft Azure Deployment Center, I could connect my GitLab repository and the portal successfully fetched the commits. But after the update I cannot deploy my GitLab repository to the Azure portal the same way. Does anyone know how to fix this?
My problem is with caching. In my own proxy I cache the first certificate for both *.redacted.com and redacted.com, but then, when I visit the second host, I reuse the first certificate because *.redacted.com matches foo.redacted.com.
I can easily add a sort of "specificity rule", since foo.redacted.com seems more specific than *.redacted.com, but I'd like to know whether there is such a rule or the two certificates shouldn't overlap.
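For what it's worth, a minimal Python sketch of the specificity rule described above, assuming single-label wildcards in the RFC 6125 style (an exact hostname match wins over a wildcard match):

def covers(pattern, host):
    """True if a certificate name covers host; '*' matches exactly one label."""
    if pattern.startswith("*."):
        return "." in host and host.split(".", 1)[1] == pattern[2:]
    return pattern == host

def pick_cached_cert(cached_names, host):
    """Prefer an exact-name certificate over a wildcard one."""
    candidates = [n for n in cached_names if covers(n, host)]
    return min(candidates, key=lambda n: n.startswith("*.")) if candidates else None

print(pick_cached_cert(["*.redacted.com", "foo.redacted.com"], "foo.redacted.com"))  # foo.redacted.com
print(pick_cached_cert(["*.redacted.com", "redacted.com"], "redacted.com"))          # redacted.com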
I want to change a privilege for my glpiuser from 'N' to 'Y' in MySQL server. What is the command to do this task? Here's the image that displays my users. I'm using Ubuntu 20.04.1.
Say, for example, during a DNS migration to Cloudflare, rather than transferring to "fred.ns.cloudflare.com" you typo'd "ferd.ns.cloudflare.com" or something similar for NS1, and the same kind of thing for NS2.
You realise this after the change has propagated, so you can no longer edit DNS on your original DNS host, but cloudflare never receives the domains.
Is there a way to recover from that kind of situation / would the transfer fail in the first place or something similar, or would you effectively just lose control of your DNS?
-- Not something that's actually happened to me, but something of a potential nightmare scenario that I can't find any information on, which makes me think I may be overly worried about nothing?
I'm using RHEL 8, and I have run into a crazy problem. My user account is unable to open PHP files.
If I have a file, owned by my user, and readable by my user, and I add <?php as the first line, I'm suddenly unable to open, edit, or view the file, even though I have not otherwise changed my permissions. It tells me: cat: test.txt: Operation not permitted
If I look at the file using file, I see the file reported as PHP Script once I add the above line.
It doesn't appear to be an SELinux problem, since setenforce 0 doesn't change the behavior, and audit2allow doesn't see anything.
It's possible this is happening to all script files, but on this server, I only need to use PHP scripts. Help!
I am trying to create an EC2 instance (Amazon Linux, so I shouldn't have to configure the SSM agent as it should be autoconfigured) in a private subnet, and want to be able to SSH into it. According to this post I have to use AWS Systems Manager for this. I've done quite a bit with codestar/beanstalk before, but now simply want to be able to create and delete everything via the AWS CLI manually for learning purposes.
Here are the commands I'm able to run fine (the EC2 instance is created successfully with my role):
aws iam create-role --role-name ec2-role --assume-role-policy-document file://roles/ec2-role.json
aws iam attach-role-policy --role-name ec2-role --policy-arn "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
aws iam create-instance-profile --instance-profile-name ssm-instance-profile-for-ec2-instances
aws iam add-role-to-instance-profile --instance-profile-name ssm-instance-profile-for-ec2-instances --role-name ec2-role

// Creating the EC2 instance
aws ec2 run-instances --image-id ami-0db9040eb3ab74509 --count 1 --instance-type t2.micro --key-name key-pair-for-instance1 --subnet-id <my_valid_subnet_id> --iam-instance-profile Name=ssm-instance-profile-for-ec2-instances
On my personal computer, I can only ping hostnames using the FQDN when using version 3.2.3 and version 2.7.1. Yet I am able to ping by hostname and by FQDN when using version 2.5.1.
However, one of my field co-workers' computers can ping hostnames without using the FQDN when using version 3.2.3. Sadly, it can't ping hostnames without using the FQDN when using version 2.5.1. I didn't try version 2.7.1, since version 3.2.3 worked.
Both of these computers are running Windows 10 and fully updated.
We find that for some servers disk read I/O is very high. We also notice that there are many major page faults on those servers. But when we checked /proc/zoneinfo, there are enough free pages. Here is the content of /proc/zoneinfo:
pages free     3913507
      min      11333
      low      14166
      high     16999
      scanned  0
      spanned  16777216
      present  16777216
      managed  16507043
nr_free_pages  3913507
We also use "perf" to monitor the event of "mm_filemap_delete_from_page_cache". Here is the result of perf:
I'm unable to SSH into a server from one machine on my network. I can successfully SSH using the exact same port, address, user, and ssh key from other machines on my network. When I try to connect, half of my MOTD is printed out and then the connection hangs. I figured it might be an issue with my terminal reading the MOTD, but I've tried several different terminals with the WSL bash shell and the problem is consistent.
What could the issue be, or what would be the next step to diagnosing this?
The server is running Ubuntu 20.04.2 LTS and OpenSSH 8.2p1
Where am I making a mistake? Do I enter the dedicated IP in the fields? I could not understand! Why am I getting this error? How should it be properly configured?
I've a brand new WD RED 6 TB HDD (WD50EFAX) in my HP Microserver Gen 8 running Debian 10. I used LVM caching for years, to improve reading performance.
Today, I investigated a performance bottleneck when copying large files over SMB. It resulted in a dd test
After disabling the LVM caching of the WD RED, this value increased to 120 MB/s, which is usual for such HDDs, I guess. The bottleneck occurs after a few hundred MB have been written. My cache size is 10G as you can see below.
The HDD's own write cache is disabled (hdparm -W0 /dev/sdb); I double-checked this.
So, what could cause the LVM cache to slow down write performance? The cache type is write-through so it should work as pure read cache.
So, I'm almost finished building my first major production web app, and am wondering how to manage the backup protocol.
Cold backups via my hosting control panel seem ideal - but daily downtime sounds awful for UX. Hot backups seem pointless as they cannot be trusted to not be corrupt.
My server runs Debian.
Is there a way to SSH in and clone the filesystem on my local machine, before encrypting with Veracrypt, before posting it to the moon, as an off-planet backup? (Lol.)
I suppose I don't need to back up the entire fs on a daily basis, but definitely MongoDB. What's the easiest way to automate a MongoDB backup?
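The simplest approach is usually a scheduled mongodump. A small Python wrapper you could run from cron, as a sketch (it assumes mongodump is on PATH and the default localhost connection; the output directory is illustrative):

import datetime
import subprocess

def backup_mongodb(out_root="/var/backups/mongo"):
    """Dump all databases into a timestamped directory using mongodump."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
    out_dir = f"{out_root}/{stamp}"
    subprocess.run(["mongodump", "--out", out_dir], check=True)
    return out_dir

if __name__ == "__main__":
    print("Backup written to", backup_mongodb())

A daily cron entry pointing at that script, plus rotating or syncing the dumps off-box, covers the database side without any downtime.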
I need to create an ansible-vault file to store credentials in a task in a playbook. This file would be used by another playbook. Is there an internal ansible method/module to accomplish this? I would prefer not to do it invoking shell/command. Any help would be highly appreciated.
For agent2 "Daemonization" is "no". Does it mean agent2 service does not run in background? That does not seem right...
Similarly "Drop user privileges" is "no". To me it sounds like the service would run as "root". However on testing, I can see that service is running as "zabbix" user.
Also, is there anything else I should know when using agent2? e.g. any limitations, gotchas?
I'm trying to export an external disk, so I configured my nfs-server service to wait for disk1 to mount; however, it fails.
This is the situation after boot:
$ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/etc/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-04-26 14:46:28 CEST; 3min 7s ago
  Process: 307 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=1/FAILURE)
  Process: 312 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 314 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)

abr 26 14:46:28 raspberrypi systemd[1]: Starting NFS server and services...
abr 26 14:46:28 raspberrypi exportfs[307]: exportfs: Failed to stat /media/pi/disk1: No such file or directory
abr 26 14:46:28 raspberrypi systemd[1]: nfs-server.service: Control process exited, code=exited, status=1/FAILURE
abr 26 14:46:28 raspberrypi systemd[1]: nfs-server.service: Failed with result 'exit-code'.
abr 26 14:46:28 raspberrypi systemd[1]: Failed to start NFS server and services.
If I just restart the service, it works smoothly:
$ sudo systemctl restart nfs-server.service
$ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/etc/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
   Active: active (exited) since Sun 2020-04-26 14:59:51 CEST; 4s ago
  Process: 943 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
  Process: 944 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
 Main PID: 944 (code=exited, status=0/SUCCESS)

abr 26 14:59:51 raspberrypi systemd[1]: Starting NFS server and services...
abr 26 14:59:51 raspberrypi systemd[1]: Started NFS server and services.
I configured the service with "Requires" and "After" on the disk1 mount, but it didn't work:
$ systemctl status media-pi-disk1.mount
● media-pi-disk1.mount - /media/pi/disk1
   Loaded: loaded
   Active: active (mounted) since Sun 2020-04-26 14:47:34 CEST; 3h 22min ago
    Where: /media/pi/disk1
     What: /dev/sda1

$ egrep -v '^#|^$' /etc/fstab
proc            /proc  proc  defaults  0  0
/dev/mmcblk0p8  /boot  vfat  defaults  0  2
/dev/mmcblk0p9  /

$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 931,5G  0 disk
└─sda1        8:1    0 931,5G  0 part /media/pi/disk1
mmcblk0     179:0    0  29,7G  0 disk
├─mmcblk0p1 179:1    0   2,4G  0 part
├─mmcblk0p2 179:2    0     1K  0 part
├─mmcblk0p5 179:5    0    32M  0 part
├─mmcblk0p6 179:6    0   512M  0 part /media/pi/System
├─mmcblk0p7 179:7    0  12,1G  0 part /media/pi/Storage
├─mmcblk0p8 179:8    0   256M  0 part /boot
└─mmcblk0p9 179:9    0  14,5G  0 part /

$ mount
/dev/mmcblk0p9 on / type ext4 (rw,noatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=217076k,nr_inodes=54269,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mmcblk0p8 on /boot type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=44280k,mode=700,uid=1000,gid=1000)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda1 on /media/pi/disk1 type ext4 (rw,nosuid,nodev,relatime,uhelper=udisks2)
/dev/mmcblk0p7 on /media/pi/Storage type ext4 (rw,nosuid,nodev,relatime,uhelper=udisks2)
/dev/mmcblk0p6 on /media/pi/System type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)
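For reference, the dependency stanza described above would look something like this in the nfs-server unit file; systemd also offers RequiresMountsFor=, which derives the same ordering from a path (a sketch, not the complete unit):

[Unit]
Description=NFS server and services
# explicit dependency on the mount unit:
Requires=media-pi-disk1.mount
After=media-pi-disk1.mount
# or, equivalently, derive it from the mount path:
# RequiresMountsFor=/media/pi/disk1

One caveat worth stating as an assumption: these directives only help if /media/pi/disk1 is actually mounted by systemd at boot; a mount created later by the desktop's udisks2 automounter (as the uhelper=udisks2 option above suggests) is not ordered by them.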
We just recently installed a new RHEL7 server. Inside of this server we have a bunch of vhosts, and inside the vhosts there are a couple of lines that look like this -
So in order to handle this, we use mod_jk inside of our apache configuration. However, when I try to start apache, I get the following error -
Syntax error on line 1 of /etc/httpd/conf.d/mod_jk.conf: Cannot load /etc/httpd/modules/mod_jk.so into server: /etc/httpd/modules/mod_jk.so: undefined symbol: ap_get_server_version
The mod_jk.conf file is inside of /etc/httpd/conf.d, and it looks like this -
LoadModule jk_module /etc/httpd/modules/mod_jk.so
JkWorkersFile /etc/httpd/conf.d/workers.properties
JkLogFile /var/log/httpd/mod_jk.log
# Change to WARN or ERROR for Prod
JkLogLevel info
JkShmFile /var/log/httpd/mod_jk.shm
JkMount /rulesApi/rules/* rulesEngine
JkMount /api/* rulesEngine
JkMount /* rulesEditor
JkMount /rules_editor/* rulesEditor
Any ideas as to what that error means, and how I can get httpd to start?
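For what it's worth, ap_get_server_version was removed in Apache httpd 2.4, which is what RHEL7 ships, so this symbol error usually means the mod_jk.so binary was built against httpd 2.2. A hedged sketch of the usual cure, rebuilding the connector against the local apxs (the tarball version below is only a placeholder, and apxs comes from the httpd-devel package):

# rebuild mod_jk against the httpd that will load it
tar xzf tomcat-connectors-1.2.x-src.tar.gz
cd tomcat-connectors-1.2.x-src/native
./configure --with-apxs=/usr/bin/apxs
make
sudo cp apache-2.0/mod_jk.so /etc/httpd/modules/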
The output of the sudo service mysql start command is mysql: Unrecognized service.
Similarly, sudo service mysqld start produces no output at all.
When I tried sudo service mysqld status, it says stopped.
I went through /var/log/mysql.log and found this error:
2015-10-20 08:00:54 23694 [Note] InnoDB: 128 rollback segment(s) are active.
2015-10-20 08:00:54 23694 [Note] InnoDB: Waiting for purge to start
2015-10-20 08:00:54 23694 [Note] InnoDB: 5.6.21 started; log sequence number 1600607
2015-10-20 08:00:54 23694 [Note] Server hostname (bind-address): '*'; port: 3306
2015-10-20 08:00:54 23694 [Note] IPv6 is available.
2015-10-20 08:00:54 23694 [Note] - '::' resolves to '::';
2015-10-20 08:00:54 23694 [Note] Server socket created on IP: '::'.
2015-10-20 08:00:54 23694 [ERROR] /usr/local/mysql/bin/mysqld: Can't create/write to file '/var/run/mysqld/mysqld.pid' (Errcode: 2 - No such file or directory)
2015-10-20 08:00:54 23694 [ERROR] Can't start server: can't create PID file: No such file or directory
151020 08:00:54 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
I looked online for a fix, and the advice said this was a permissions issue, so I created /var/run/mysqld for mysqld.pid and chowned the directory to mysql:mysql.
But the problem still persists. Can anyone help me out with this?
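Two things may be worth checking, sketched below; the my.cnf path shown is only an example. Errcode 2 is "no such file or directory" rather than a permissions error, and on many systems /var/run is a tmpfs that is wiped at every boot, so a directory created there by hand disappears on restart.

# confirm the directory still exists and is owned by mysql at start time
ls -ld /var/run/mysqld
# confirm which pid file mysqld will actually use
grep -R "pid-file" /etc/my.cnf /etc/mysql/ 2>/dev/null
# as a test, point the pid file at a persistent location in my.cnf:
#   [mysqld]
#   pid-file=/usr/local/mysql/data/mysqld.pid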
I have a WooCommerce site. I have a recurring error in the Apache error.log:
[Mon Nov 02 17:04:58.723578 2015] [core:error] [pid 2922] [client 172.31.12.207:19044] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: https://gremyo.com/wp-content/themes/bishop/woocommerce/style.css
[Mon Nov 02 17:04:58.812460 2015] [core:error] [pid 2928] [client 172.31.12.207:19045] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: https://gremyo.com/wp-content/themes/bishop/woocommerce/style.css
[Mon Nov 02 17:13:58.112870 2015] [core:error] [pid 3100] [client 172.31.27.233:39991] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Mon Nov 02 17:13:58.430530 2015] [core:error] [pid 2905] [client 172.31.27.233:39992] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
[Mon Nov 02 17:23:23.530340 2015] [core:error] [pid 3205] [client 172.31.11.223:48080] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://gremyo.com/wp-signup.php?new=publisherweb
[Mon Nov 02 17:25:08.819153 2015] [core:error] [pid 3244] [client 172.31.27.233:40380] AH00124: Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: https://muyinteresante.gremyo.com/
I have seen that the error happens when JavaScript opens a window with the detailed images (referer ...style.css) on the single product page. The Google Chrome console registers these errors:
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
https://gremyo.com/wp-content/themes/bishop/fonts/WooCommerce.woff
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
https://gremyo.com/wp-content/themes/bishop/fonts/WooCommerce.ttf
I have this in the .htaccess file, related to the Chrome errors:
<IfModule mod_headers.c>
  <FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
    Header set Access-Control-Allow-Origin "*"
  </FilesMatch>
</IfModule>
However, the error appears in other places on the site as well (I haven't identified them all yet).
The reason for investigating this is that the site doesn't load CSS properly on some product pages when they're cached. I use the wp-super-cache and autoptimize plugins.
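The log message itself names the next diagnostic step; a minimal sketch for the vhost configuration, using only the core directives the error mentions:

# log a backtrace for each internal redirect
LogLevel debug
# optionally raise the recursion cap (default 10) while debugging
LimitInternalRecursion 20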
I found example.auth.conf in the libraries source tarball. With the help of strace, I found the directory where openvasmd expects to find its auth config:
PREFIX/var/lib/openvas/openvasmd/auth.conf
Because the PREFIX variable is empty for me, the path is the following:
/var/lib/openvas/openvasmd/auth.conf
Then I raised the logging level from 127 to 128 for openvasmd (also running it in verbose mode with -v, because without that flag the interesting information never shows up in the logs).
Following information I found in the mailing-list archives (example; yeah, it's rather outdated =\), I added to the config:
I also commented out the method:file section for test purposes. But after restarting the service and attempting a login (using the GSAD web interface), I found this in openvasmd.log:

lib auth:WARNING:2015-06-23 12h04.38 utc:15352: Unsupported authentication method: method:ldap
And also the obvious result of the login:
md omp: DEBUG:2015-06-23 14h33.05 utc:17775: XML start: authenticate (0)
... - followed by my credentials being set; by the way, the password appeared in the log file in plain text.
At first I thought it was a misconfiguration issue while compiling the libraries (a missing LDAP-support flag). But both the libraries and openvas-manager are linked with the LDAP libs (I also added the LDAP dev libs to the debian/control file as build dependencies for the packages):
And I found no references to method:ldap in the libraries' source files. Only method:ldap_connect was found, but that is the so-called "per-user LDAP authentication". If I understand the concept correctly, it is an authentication mechanism for already-created users who have the right to authenticate via LDAP. I have tested it and it works fine (which confirms the OpenVAS libraries/manager were compiled with LDAP support), but it's not the full LDAP-integration feature I need.
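For reference, since method:ldap_connect is the mechanism that does work in this build, here is a heavily hedged sketch of what its auth.conf section tends to look like: the key names are recalled from example.auth.conf and should be checked against the copy in the tarball, and the host and DN values are placeholders (as far as I know, authdn must contain exactly one %s, which is replaced by the login name):

[method:ldap_connect]
enable=true
order=2
ldaphost=ldap.example.com
# %s is substituted with the username at authentication time
authdn=uid=%s,ou=people,dc=example,dc=com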
service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/mod_security2.so into server: /etc/httpd/modules/mod_security2.so: undefined symbol: ap_unixd_set_global_mutex_perms
                                                           [FAILED]
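For context, ap_unixd_set_global_mutex_perms is the httpd 2.4 spelling of a symbol that httpd 2.2 exports without the ap_ prefix, so this error usually means the mod_security2.so was built for a different major version of Apache than the server loading it. A hedged sketch of the usual check and cure (the ModSecurity source directory is a placeholder, and apxs comes from the httpd-devel package):

# check the version of the httpd doing the loading
httpd -v
# rebuild ModSecurity against this server's own apxs so the symbols match
cd modsecurity-2.x
./configure --with-apxs=$(which apxs)
make && sudo make install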
I have a Debian 6 system running Samba 3.5.6 that has been successfully set up to authenticate against an Active Directory domain (via SSH, that is). I have a directory (let's call it /foo) that I want to be editable by both local users and AD users. I have created a local group "fooedit" and added both the local users and the domain users to it. I have set up the necessary ACLs on /foo to allow fooedit users to edit the files, and tested that this works via SSH for both the local and the AD users.
I would like the AD users to be able to edit via the share as well, but I can't seem to get the right configuration. They can see the share, but it prompts them for credentials when they try to access it, and their credentials don't work. Is this possible, and if so, what do I need to do? I'd rather not do this with an AD group, because I may need to do this on many machines with different users on each machine, so a local group would be cleaner.
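A hedged sketch of the share definition this usually takes, assuming the [global] section already carries the working AD join (security = ads plus the winbind settings) and that @fooedit resolves to the local group from above; the share name is a placeholder:

[foo]
   path = /foo
   valid users = @fooedit
   read only = no
   # honour the POSIX ACLs already set on /foo rather than masking them away
   inherit acls = yes
   create mask = 0660
   directory mask = 0770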
Having a problem with the --message flag to the svn import command. On some servers it works, but on others it gets confused if the message contains spaces, even if you single or double quote the message string thus:
If I limit the message to one without any spaces, it succeeds every time. Clearly the problem is the command failing to recognise a quoted string, but why?
The difference between success and failure seems to come down to the particular OS/shell combination I'm using. The command works on SUSE 10.3 with ksh Version M 93s+ 2008-01-31, but fails on RHEL 5.6 with ksh Version AJM 93t+ 2010-02-02. Or perhaps that's a red herring, and the real problem is something else that differs between the environments?
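One way to take shell quoting out of the equation entirely, whichever ksh build is at fault, is to pass the message via a file instead of on the command line; a minimal sketch with a placeholder repository URL:

echo "Initial import, with spaces in the message" > /tmp/svn-msg.txt
svn import . http://svn.example.com/repos/project/trunk --file /tmp/svn-msg.txt

If the quoted form still has to work, comparing set -x traces of the command under each ksh would show exactly what argument vector svn actually receives.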
I have some arbitrary number of servers with the same user/pass combination. I want to write a script (that I call once) so that
ssh-copy-id user@myserver
is called for each server. Since they all have the same user/pass this should be easy, but ssh-copy-id wants me to type the password in separately each time, which defeats the purpose of my script. There is no option for supplying a password, i.e. no ssh-copy-id -p mypassword user@myserver.
How can I write a script that automatically fills in the password field when ssh-copy-id asks for it?
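ssh-copy-id itself has no password option (its -p is the SSH port), but sshpass can answer the prompt non-interactively; a minimal sketch, assuming sshpass is installed and using placeholder hostnames:

#!/bin/sh
# feed the shared password to ssh-copy-id's prompt for each host
for host in server1 server2 server3; do
    sshpass -p 'mypassword' ssh-copy-id "user@$host"
done

Keeping a plain-text password in a script has obvious security implications, so it may be best treated as a one-off bootstrap, relying on the copied keys from then on.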
I'm having some problems when adding a second mailbox server to my DAG in Exchange 2010. The test setup goes like this:

1x Windows Server 2008 (DC/DNS)
2x Windows Server 2008 (Exchange 2010)
I have made sure all services are up and running and that the "Exchange Trusted Subsystem" account is set as a local admin.
When I create a DAG I can add the first mailbox server (A) without any problems, but when I go to add the second (B) it gives me an error saying "Unable to contact the Cluster service on 1 other members (member) of the Database availability group."
It does the same if I add (B) first and then try to add (A).
Here is a part of the log file:
[2010-04-05T15:00:27] GetRemoteCluster() for the mailbox server failed with exception = An Active Manager operation failed. Error: An error occurred while attempting a cluster operation. Error: Cluster API '"OpenCluster(EXCHANGE20102.area51.com) failed with 0x6d9. Error: There are no more endpoints available from the endpoint mapper"' failed.. This is OK.
[2010-04-05T15:00:27] Ignoring previous error, as it is acceptable if the cluster does not exist yet.
[2010-04-05T15:00:27] DumpClusterTopology: Opening remote cluster AREA51DAG01.
[2010-04-05T15:00:27] DumpClusterTopology: Failed opening with Microsoft.Exchange.Cluster.Replay.AmClusterApiException: An Active Manager operation failed. Error: An error occurred while attempting a cluster operation. Error: Cluster API '"OpenCluster(AREA51DAG01.area51.com) failed with 0x5. Error: Access is denied"' failed. ---> System.ComponentModel.Win32Exception: Access is denied
--- End of inner exception stack trace ---